00:00:00.001 Started by upstream project "autotest-nightly" build number 4281
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3644
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.167 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.168 The recommended git tool is: git
00:00:00.168 using credential 00000000-0000-0000-0000-000000000002
00:00:00.170 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.223 Fetching changes from the remote Git repository
00:00:00.224 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.265 Using shallow fetch with depth 1
00:00:00.265 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.265 > git --version # timeout=10
00:00:00.295 > git --version # 'git version 2.39.2'
00:00:00.295 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.315 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.315 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.257 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.269 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.282 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.282 > git config core.sparsecheckout # timeout=10
00:00:08.294 > git read-tree -mu HEAD # timeout=10
00:00:08.308 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.327 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.327 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.414 [Pipeline] Start of Pipeline
00:00:08.425 [Pipeline] library
00:00:08.427 Loading library shm_lib@master
00:00:08.427 Library shm_lib@master is cached. Copying from home.
00:00:08.444 [Pipeline] node
00:00:08.466 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:00:08.467 [Pipeline] {
00:00:08.478 [Pipeline] catchError
00:00:08.480 [Pipeline] {
00:00:08.492 [Pipeline] wrap
00:00:08.499 [Pipeline] {
00:00:08.507 [Pipeline] stage
00:00:08.508 [Pipeline] { (Prologue)
00:00:08.526 [Pipeline] echo
00:00:08.528 Node: VM-host-SM9
00:00:08.534 [Pipeline] cleanWs
00:00:08.544 [WS-CLEANUP] Deleting project workspace...
00:00:08.544 [WS-CLEANUP] Deferred wipeout is used...
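(Editor's aside on the SCM steps at the top of this log: the checkout boils down to a depth-1 shallow fetch followed by a detached checkout of FETCH_HEAD. A minimal hand-runnable sketch of the same sequence outside Jenkins, with URL, branch, and flags taken from the log; the target directory is a placeholder and the Jenkins-specific credential/proxy handling is omitted:

    # Sketch: replicate the Jenkins SCM steps by hand
    git init jbp && cd jbp
    git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --progress --depth=1 origin refs/heads/master
    git checkout -f FETCH_HEAD   # detaches at db4637e8b94..., as reported above

The shallow fetch keeps the clone small, which matters because this jbp checkout happens on every build.)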
00:00:08.550 [WS-CLEANUP] done
00:00:08.748 [Pipeline] setCustomBuildProperty
00:00:08.857 [Pipeline] httpRequest
00:00:09.215 [Pipeline] echo
00:00:09.216 Sorcerer 10.211.164.20 is alive
00:00:09.223 [Pipeline] retry
00:00:09.224 [Pipeline] {
00:00:09.235 [Pipeline] httpRequest
00:00:09.239 HttpMethod: GET
00:00:09.240 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.240 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.255 Response Code: HTTP/1.1 200 OK
00:00:09.255 Success: Status code 200 is in the accepted range: 200,404
00:00:09.256 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:34.953 [Pipeline] }
00:00:34.970 [Pipeline] // retry
00:00:34.978 [Pipeline] sh
00:00:35.259 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:35.276 [Pipeline] httpRequest
00:00:35.655 [Pipeline] echo
00:00:35.657 Sorcerer 10.211.164.20 is alive
00:00:35.665 [Pipeline] retry
00:00:35.667 [Pipeline] {
00:00:35.680 [Pipeline] httpRequest
00:00:35.684 HttpMethod: GET
00:00:35.685 URL: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:00:35.685 Sending request to url: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:00:35.687 Response Code: HTTP/1.1 200 OK
00:00:35.687 Success: Status code 200 is in the accepted range: 200,404
00:00:35.688 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:00:55.746 [Pipeline] }
00:00:55.764 [Pipeline] // retry
00:00:55.772 [Pipeline] sh
00:00:56.057 + tar --no-same-owner -xf spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:00:59.361 [Pipeline] sh
00:00:59.641 + git -C spdk log --oneline -n5
00:00:59.641 d47eb51c9 bdev: fix a race between reset start and complete
00:00:59.641 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:00:59.641 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort()
00:00:59.641 4bcab9fb9 correct kick for CQ full case
00:00:59.642 8531656d3 test/nvmf: Interrupt test for local pcie nvme device
00:00:59.662 [Pipeline] writeFile
00:00:59.677 [Pipeline] sh
00:00:59.959 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:59.971 [Pipeline] sh
00:01:00.252 + cat autorun-spdk.conf
00:01:00.252 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:00.252 SPDK_TEST_NVMF=1
00:01:00.252 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:00.252 SPDK_TEST_URING=1
00:01:00.252 SPDK_TEST_VFIOUSER=1
00:01:00.252 SPDK_TEST_USDT=1
00:01:00.252 SPDK_RUN_ASAN=1
00:01:00.252 SPDK_RUN_UBSAN=1
00:01:00.252 NET_TYPE=virt
00:01:00.252 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:00.260 RUN_NIGHTLY=1
00:01:00.262 [Pipeline] }
00:01:00.275 [Pipeline] // stage
00:01:00.291 [Pipeline] stage
00:01:00.293 [Pipeline] { (Run VM)
00:01:00.306 [Pipeline] sh
00:01:00.586 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:00.586 + echo 'Start stage prepare_nvme.sh'
00:01:00.586 Start stage prepare_nvme.sh
00:01:00.586 + [[ -n 4 ]]
00:01:00.586 + disk_prefix=ex4
00:01:00.586 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]]
00:01:00.586 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]]
00:01:00.586 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf
00:01:00.586 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:00.586 ++ SPDK_TEST_NVMF=1
00:01:00.586 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:00.586 ++ SPDK_TEST_URING=1
00:01:00.586 ++ SPDK_TEST_VFIOUSER=1
00:01:00.586 ++ SPDK_TEST_USDT=1
00:01:00.586 ++ SPDK_RUN_ASAN=1
00:01:00.586 ++ SPDK_RUN_UBSAN=1
00:01:00.586 ++ NET_TYPE=virt
00:01:00.586 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:00.586 ++ RUN_NIGHTLY=1
00:01:00.586 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:01:00.586 + nvme_files=()
00:01:00.586 + declare -A nvme_files
00:01:00.586 + backend_dir=/var/lib/libvirt/images/backends
00:01:00.586 + nvme_files['nvme.img']=5G
00:01:00.586 + nvme_files['nvme-cmb.img']=5G
00:01:00.586 + nvme_files['nvme-multi0.img']=4G
00:01:00.586 + nvme_files['nvme-multi1.img']=4G
00:01:00.586 + nvme_files['nvme-multi2.img']=4G
00:01:00.586 + nvme_files['nvme-openstack.img']=8G
00:01:00.586 + nvme_files['nvme-zns.img']=5G
00:01:00.586 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:00.586 + (( SPDK_TEST_FTL == 1 ))
00:01:00.586 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:00.586 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:00.586 + for nvme in "${!nvme_files[@]}"
00:01:00.586 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:01:00.586 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:00.586 + for nvme in "${!nvme_files[@]}"
00:01:00.586 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:01:00.586 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:00.586 + for nvme in "${!nvme_files[@]}"
00:01:00.586 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:01:00.586 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:00.586 + for nvme in "${!nvme_files[@]}"
00:01:00.586 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:01:00.586 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:00.586 + for nvme in "${!nvme_files[@]}"
00:01:00.586 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:01:00.845 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:00.845 + for nvme in "${!nvme_files[@]}"
00:01:00.845 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:01:00.845 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:00.845 + for nvme in "${!nvme_files[@]}"
00:01:00.845 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:01:00.845 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:00.845 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:01:00.845 + echo 'End stage prepare_nvme.sh'
00:01:00.845 End stage prepare_nvme.sh
00:01:00.857 [Pipeline] sh
00:01:01.139 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:01.139 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39
00:01:01.398 
00:01:01.398 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant
00:01:01.398 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk
00:01:01.398 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:01:01.398 HELP=0
00:01:01.398 DRY_RUN=0
00:01:01.398 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,
00:01:01.398 NVME_DISKS_TYPE=nvme,nvme,
00:01:01.398 NVME_AUTO_CREATE=0
00:01:01.398 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,
00:01:01.398 NVME_CMB=,,
00:01:01.398 NVME_PMR=,,
00:01:01.398 NVME_ZNS=,,
00:01:01.398 NVME_MS=,,
00:01:01.398 NVME_FDP=,,
00:01:01.398 SPDK_VAGRANT_DISTRO=fedora39
00:01:01.398 SPDK_VAGRANT_VMCPU=10
00:01:01.398 SPDK_VAGRANT_VMRAM=12288
00:01:01.398 SPDK_VAGRANT_PROVIDER=libvirt
00:01:01.398 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:01.398 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:01.398 SPDK_OPENSTACK_NETWORK=0
00:01:01.398 VAGRANT_PACKAGE_BOX=0
00:01:01.398 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:01.398 FORCE_DISTRO=true
00:01:01.398 VAGRANT_BOX_VERSION=
00:01:01.398 EXTRA_VAGRANTFILES=
00:01:01.398 NIC_MODEL=e1000
00:01:01.398 
00:01:01.398 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt'
00:01:01.398 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:01:04.687 Bringing machine 'default' up with 'libvirt' provider...
00:01:04.947 ==> default: Creating image (snapshot of base box volume).
00:01:05.206 ==> default: Creating domain with the following settings...
00:01:05.206 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731973511_27a194f09a99270da03e
00:01:05.206 ==> default: -- Domain type: kvm
00:01:05.206 ==> default: -- Cpus: 10
00:01:05.206 ==> default: -- Feature: acpi
00:01:05.206 ==> default: -- Feature: apic
00:01:05.206 ==> default: -- Feature: pae
00:01:05.206 ==> default: -- Memory: 12288M
00:01:05.206 ==> default: -- Memory Backing: hugepages:
00:01:05.206 ==> default: -- Management MAC:
00:01:05.206 ==> default: -- Loader:
00:01:05.206 ==> default: -- Nvram:
00:01:05.206 ==> default: -- Base box: spdk/fedora39
00:01:05.206 ==> default: -- Storage pool: default
00:01:05.206 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731973511_27a194f09a99270da03e.img (20G)
00:01:05.206 ==> default: -- Volume Cache: default
00:01:05.206 ==> default: -- Kernel:
00:01:05.206 ==> default: -- Initrd:
00:01:05.206 ==> default: -- Graphics Type: vnc
00:01:05.206 ==> default: -- Graphics Port: -1
00:01:05.206 ==> default: -- Graphics IP: 127.0.0.1
00:01:05.206 ==> default: -- Graphics Password: Not defined
00:01:05.206 ==> default: -- Video Type: cirrus
00:01:05.206 ==> default: -- Video VRAM: 9216
00:01:05.206 ==> default: -- Sound Type:
00:01:05.206 ==> default: -- Keymap: en-us
00:01:05.206 ==> default: -- TPM Path:
00:01:05.206 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:05.206 ==> default: -- Command line args:
00:01:05.206 ==> default: -> value=-device,
00:01:05.206 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:05.206 ==> default: -> value=-drive,
00:01:05.206 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0,
00:01:05.206 ==> default: -> value=-device,
00:01:05.206 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:05.207 ==> default: -> value=-device,
00:01:05.207 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:05.207 ==> default: -> value=-drive,
00:01:05.207 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:05.207 ==> default: -> value=-device,
00:01:05.207 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:05.207 ==> default: -> value=-drive,
00:01:05.207 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:05.207 ==> default: -> value=-device,
00:01:05.207 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:05.207 ==> default: -> value=-drive,
00:01:05.207 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:05.207 ==> default: -> value=-device,
00:01:05.207 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:05.207 ==> default: Creating shared folders metadata...
00:01:05.207 ==> default: Starting domain.
00:01:06.587 ==> default: Waiting for domain to get an IP address...
00:01:24.716 ==> default: Waiting for SSH to become available...
00:01:26.096 ==> default: Configuring and enabling network interfaces...
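(Editor's aside on the command-line args block above: vagrant-libvirt passes those -device/-drive pairs straight through to QEMU, defining controller nvme-0 (serial 12340) with one namespace backed by ex4-nvme.img and controller nvme-1 (serial 12341) with three namespaces backed by the multi0/1/2 images. Flattened into a single invocation, the storage portion would look roughly like this; the emulator path and device properties are taken verbatim from the log, and every other VM option is omitted:

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1 \
      -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2 \
      -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

This wiring is what the guest later reports in setup.sh status: nvme0 with a single namespace (nvme0n1) and nvme1 with three (nvme1n1 through nvme1n3).)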
00:01:30.290 default: SSH address: 192.168.121.194:22
00:01:30.290 default: SSH username: vagrant
00:01:30.290 default: SSH auth method: private key
00:01:32.826 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:41.040 ==> default: Mounting SSHFS shared folder...
00:01:41.977 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:41.977 ==> default: Checking Mount..
00:01:43.356 ==> default: Folder Successfully Mounted!
00:01:43.356 ==> default: Running provisioner: file...
00:01:43.930 default: ~/.gitconfig => .gitconfig
00:01:44.189 
00:01:44.189 SUCCESS!
00:01:44.189 
00:01:44.189 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:44.189 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:44.189 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:44.189 
00:01:44.198 [Pipeline] }
00:01:44.213 [Pipeline] // stage
00:01:44.221 [Pipeline] dir
00:01:44.221 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt
00:01:44.223 [Pipeline] {
00:01:44.234 [Pipeline] catchError
00:01:44.236 [Pipeline] {
00:01:44.247 [Pipeline] sh
00:01:44.527 + vagrant ssh-config --host vagrant
00:01:44.527 + sed -ne /^Host/,$p
00:01:44.527 + tee ssh_conf
00:01:47.816 Host vagrant
00:01:47.816 HostName 192.168.121.194
00:01:47.816 User vagrant
00:01:47.816 Port 22
00:01:47.816 UserKnownHostsFile /dev/null
00:01:47.816 StrictHostKeyChecking no
00:01:47.816 PasswordAuthentication no
00:01:47.816 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:47.816 IdentitiesOnly yes
00:01:47.816 LogLevel FATAL
00:01:47.816 ForwardAgent yes
00:01:47.816 ForwardX11 yes
00:01:47.816 
00:01:47.831 [Pipeline] withEnv
00:01:47.834 [Pipeline] {
00:01:47.860 [Pipeline] sh
00:01:48.140 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:48.140 source /etc/os-release
00:01:48.140 [[ -e /image.version ]] && img=$(< /image.version)
00:01:48.140 # Minimal, systemd-like check.
00:01:48.140 if [[ -e /.dockerenv ]]; then
00:01:48.140 # Clear garbage from the node's name:
00:01:48.140 # agt-er_autotest_547-896 -> autotest_547-896
00:01:48.140 # $HOSTNAME is the actual container id
00:01:48.140 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:48.140 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:48.140 # We can assume this is a mount from a host where container is running,
00:01:48.140 # so fetch its hostname to easily identify the target swarm worker.
00:01:48.140 container="$(< /etc/hostname) ($agent)"
00:01:48.140 else
00:01:48.140 # Fallback
00:01:48.140 container=$agent
00:01:48.140 fi
00:01:48.140 fi
00:01:48.140 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:48.140 
00:01:48.412 [Pipeline] }
00:01:48.429 [Pipeline] // withEnv
00:01:48.439 [Pipeline] setCustomBuildProperty
00:01:48.454 [Pipeline] stage
00:01:48.456 [Pipeline] { (Tests)
00:01:48.473 [Pipeline] sh
00:01:48.755 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:49.027 [Pipeline] sh
00:01:49.364 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:49.379 [Pipeline] timeout
00:01:49.379 Timeout set to expire in 1 hr 0 min
00:01:49.381 [Pipeline] {
00:01:49.396 [Pipeline] sh
00:01:49.676 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:50.244 HEAD is now at d47eb51c9 bdev: fix a race between reset start and complete
00:01:50.256 [Pipeline] sh
00:01:50.537 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:50.810 [Pipeline] sh
00:01:51.092 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:51.109 [Pipeline] sh
00:01:51.390 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo
00:01:51.649 ++ readlink -f spdk_repo
00:01:51.649 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:51.649 + [[ -n /home/vagrant/spdk_repo ]]
00:01:51.649 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:51.649 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:51.649 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:51.649 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:51.649 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:51.649 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]]
00:01:51.649 + cd /home/vagrant/spdk_repo
00:01:51.649 + source /etc/os-release
00:01:51.649 ++ NAME='Fedora Linux'
00:01:51.649 ++ VERSION='39 (Cloud Edition)'
00:01:51.649 ++ ID=fedora
00:01:51.649 ++ VERSION_ID=39
00:01:51.649 ++ VERSION_CODENAME=
00:01:51.649 ++ PLATFORM_ID=platform:f39
00:01:51.649 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:51.649 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:51.649 ++ LOGO=fedora-logo-icon
00:01:51.649 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:51.649 ++ HOME_URL=https://fedoraproject.org/
00:01:51.649 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:51.649 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:51.649 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:51.649 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:51.649 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:51.649 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:51.649 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:51.649 ++ SUPPORT_END=2024-11-12
00:01:51.649 ++ VARIANT='Cloud Edition'
00:01:51.649 ++ VARIANT_ID=cloud
00:01:51.649 + uname -a
00:01:51.649 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:51.649 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:51.907 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:51.907 Hugepages
00:01:51.907 node hugesize free / total
00:01:51.907 node0 1048576kB 0 / 0
00:01:52.166 node0 2048kB 0 / 0
00:01:52.166 
00:01:52.166 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:52.166 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:52.166 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:52.166 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:52.166 + rm -f /tmp/spdk-ld-path
00:01:52.166 + source autorun-spdk.conf
00:01:52.167 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:52.167 ++ SPDK_TEST_NVMF=1
00:01:52.167 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:52.167 ++ SPDK_TEST_URING=1
00:01:52.167 ++ SPDK_TEST_VFIOUSER=1
00:01:52.167 ++ SPDK_TEST_USDT=1
00:01:52.167 ++ SPDK_RUN_ASAN=1
00:01:52.167 ++ SPDK_RUN_UBSAN=1
00:01:52.167 ++ NET_TYPE=virt
00:01:52.167 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:52.167 ++ RUN_NIGHTLY=1
00:01:52.167 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:52.167 + [[ -n '' ]]
00:01:52.167 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:52.167 + for M in /var/spdk/build-*-manifest.txt
00:01:52.167 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:52.167 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:52.167 + for M in /var/spdk/build-*-manifest.txt
00:01:52.167 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:52.167 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:52.167 + for M in /var/spdk/build-*-manifest.txt
00:01:52.167 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:52.167 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:52.167 ++ uname
00:01:52.167 + [[ Linux == \L\i\n\u\x ]]
00:01:52.167 + sudo dmesg -T
00:01:52.167 + sudo dmesg --clear
00:01:52.167 + dmesg_pid=5251
00:01:52.167 + sudo dmesg -Tw
00:01:52.167 + [[ Fedora Linux == FreeBSD ]]
00:01:52.167 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:52.167 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:52.167 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:52.167 + [[ -x /usr/src/fio-static/fio ]]
00:01:52.167 + export FIO_BIN=/usr/src/fio-static/fio
00:01:52.167 + FIO_BIN=/usr/src/fio-static/fio
00:01:52.167 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:52.167 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:52.167 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:52.167 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:52.167 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:52.167 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:52.167 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:52.167 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:52.167 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:52.426 23:45:58 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:52.426 23:45:58 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:52.426 23:45:58 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:52.426 23:45:58 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:52.426 23:45:58 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:52.426 23:45:58 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1
00:01:52.426 23:45:58 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_VFIOUSER=1
00:01:52.426 23:45:58 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_TEST_USDT=1
00:01:52.426 23:45:58 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_RUN_ASAN=1
00:01:52.426 23:45:58 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_RUN_UBSAN=1
00:01:52.426 23:45:58 -- spdk_repo/autorun-spdk.conf@9 -- $ NET_TYPE=virt
00:01:52.426 23:45:58 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:52.426 23:45:58 -- spdk_repo/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1
00:01:52.426 23:45:58 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:52.426 23:45:58 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:52.426 23:45:58 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:52.426 23:45:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:52.426 23:45:58 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:52.426 23:45:58 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:52.426 23:45:58 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:52.426 23:45:58 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:52.426 23:45:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:52.426 23:45:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:52.426 23:45:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:52.426 23:45:58 -- paths/export.sh@5 -- $ export PATH
00:01:52.426 23:45:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:52.426 23:45:58 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:52.426 23:45:58 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:52.426 23:45:58 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731973558.XXXXXX
00:01:52.426 23:45:58 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731973558.FDKFlx
00:01:52.426 23:45:58 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:52.426 23:45:58 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:52.426 23:45:58 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:52.426 23:45:58 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:52.426 23:45:58 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:52.426 23:45:58 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:52.426 23:45:58 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:52.426 23:45:58 -- common/autotest_common.sh@10 -- $ set +x
00:01:52.426 23:45:58 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring'
00:01:52.426 23:45:58 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:52.426 23:45:58 -- pm/common@17 -- $ local monitor
00:01:52.426 23:45:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:52.426 23:45:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:52.426 23:45:58 -- pm/common@25 -- $ sleep 1
00:01:52.426 23:45:58 -- pm/common@21 -- $ date +%s
00:01:52.426 23:45:58 -- pm/common@21 -- $ date +%s
00:01:52.426 23:45:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731973558
00:01:52.426 23:45:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731973558
00:01:52.426 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731973558_collect-vmstat.pm.log
00:01:52.426 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731973558_collect-cpu-load.pm.log
00:01:53.365 23:45:59 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:53.365 23:45:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:53.365 23:45:59 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:53.365 23:45:59 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:53.365 23:45:59 -- spdk/autobuild.sh@16 -- $ date -u
00:01:53.365 Mon Nov 18 11:45:59 PM UTC 2024
00:01:53.365 23:45:59 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:53.365 v25.01-pre-190-gd47eb51c9
00:01:53.365 23:45:59 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:53.365 23:45:59 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:53.365 23:45:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:53.365 23:45:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:53.365 23:45:59 -- common/autotest_common.sh@10 -- $ set +x
00:01:53.365 ************************************
00:01:53.365 START TEST asan
00:01:53.365 ************************************
00:01:53.365 using asan
00:01:53.365 23:46:00 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:53.365 
00:01:53.365 real 0m0.000s
00:01:53.365 user 0m0.000s
00:01:53.365 sys 0m0.000s
00:01:53.365 23:46:00 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:53.365 ************************************
00:01:53.365 END TEST asan
00:01:53.365 ************************************
00:01:53.365 23:46:00 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:53.365 23:46:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:53.365 23:46:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:53.365 23:46:00 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:53.365 23:46:00 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:53.365 23:46:00 -- common/autotest_common.sh@10 -- $ set +x
00:01:53.624 ************************************
00:01:53.624 START TEST ubsan
00:01:53.624 ************************************
00:01:53.624 using ubsan
00:01:53.624 23:46:00 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:53.624 
00:01:53.624 real 0m0.000s
00:01:53.624 user 0m0.000s
00:01:53.624 sys 0m0.000s
00:01:53.624 23:46:00 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:53.624 ************************************
00:01:53.624 END TEST ubsan
00:01:53.624 ************************************
00:01:53.624 23:46:00 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:53.624 23:46:00 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:53.624 23:46:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:53.624 23:46:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:53.624 23:46:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:53.624 23:46:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:53.624 23:46:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:53.624 23:46:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:53.624 23:46:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:53.624 23:46:00 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared
00:01:53.884 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:53.884 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:54.143 Using 'verbs' RDMA provider
00:02:09.962 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:22.212 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:22.212 Creating mk/config.mk...done.
00:02:22.212 Creating mk/cc.flags.mk...done.
00:02:22.212 Type 'make' to build.
00:02:22.212 23:46:27 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:22.212 23:46:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:22.212 23:46:27 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:22.212 23:46:27 -- common/autotest_common.sh@10 -- $ set +x
00:02:22.213 ************************************
00:02:22.213 START TEST make
00:02:22.213 ************************************
00:02:22.213 23:46:27 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:22.213 make[1]: Nothing to be done for 'all'.
00:02:22.779 The Meson build system
00:02:22.779 Version: 1.5.0
00:02:22.779 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user
00:02:22.779 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
00:02:22.779 Build type: native build
00:02:22.779 Project name: libvfio-user
00:02:22.779 Project version: 0.0.1
00:02:22.779 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:22.779 C linker for the host machine: cc ld.bfd 2.40-14
00:02:22.779 Host machine cpu family: x86_64
00:02:22.779 Host machine cpu: x86_64
00:02:22.779 Run-time dependency threads found: YES
00:02:22.779 Library dl found: YES
00:02:22.779 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:22.779 Run-time dependency json-c found: YES 0.17
00:02:22.779 Run-time dependency cmocka found: YES 1.1.7
00:02:22.779 Program pytest-3 found: NO
00:02:22.779 Program flake8 found: NO
00:02:22.779 Program misspell-fixer found: NO
00:02:22.779 Program restructuredtext-lint found: NO
00:02:22.779 Program valgrind found: YES (/usr/bin/valgrind)
00:02:22.779 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:22.779 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:22.779 Compiler for C supports arguments -Wwrite-strings: YES
00:02:22.779 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:22.779 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh)
00:02:22.779 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh)
00:02:22.779 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
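(Editor's aside: the 37-target ninja run that follows is SPDK's bundled libvfio-user being built as a standalone Meson project, pulled in by --with-vfio-user in the configure line above. Assuming the "User defined options" dump just below maps one-to-one onto -D flags, a hand-run equivalent would be roughly:

    # Sketch: configure, build, and stage libvfio-user the way autobuild does
    meson setup --buildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib \
        /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug \
        /home/vagrant/spdk_repo/spdk/libvfio-user
    ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
    DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user \
        meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug

The DESTDIR install step is the one the log itself shows verbatim a little further down.)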
00:02:22.779 Build targets in project: 8 00:02:22.779 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:22.779 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:22.779 00:02:22.779 libvfio-user 0.0.1 00:02:22.779 00:02:22.779 User defined options 00:02:22.779 buildtype : debug 00:02:22.779 default_library: shared 00:02:22.779 libdir : /usr/local/lib 00:02:22.779 00:02:22.779 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:23.345 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:23.345 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:23.345 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:23.345 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:23.604 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:23.604 [5/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:23.604 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:23.604 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:23.604 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:23.604 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:23.604 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:23.604 [11/37] Compiling C object samples/null.p/null.c.o 00:02:23.604 [12/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:23.604 [13/37] Compiling C object samples/client.p/client.c.o 00:02:23.604 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:23.604 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:23.604 [16/37] Compiling C object samples/server.p/server.c.o 00:02:23.604 [17/37] Linking target samples/client 00:02:23.604 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:23.604 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:23.862 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:23.862 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:23.862 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:23.862 [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:23.862 [24/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:23.862 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:23.862 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:23.862 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:23.862 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:02:23.862 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:23.862 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:23.862 [31/37] Linking target test/unit_tests 00:02:23.862 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:24.121 [33/37] Linking target samples/gpio-pci-idio-16 00:02:24.121 [34/37] Linking target samples/shadow_ioeventfd_server 00:02:24.121 [35/37] Linking target samples/null 00:02:24.121 [36/37] Linking target samples/server 00:02:24.121 [37/37] Linking target samples/lspci 00:02:24.121 INFO: autodetecting backend as ninja 00:02:24.121 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:24.121 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:24.687 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:24.687 ninja: no work to do. 00:02:34.657 The Meson build system 00:02:34.657 Version: 1.5.0 00:02:34.657 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:34.657 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:34.657 Build type: native build 00:02:34.657 Program cat found: YES (/usr/bin/cat) 00:02:34.657 Project name: DPDK 00:02:34.657 Project version: 24.03.0 00:02:34.657 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:34.657 C linker for the host machine: cc ld.bfd 2.40-14 00:02:34.657 Host machine cpu family: x86_64 00:02:34.657 Host machine cpu: x86_64 00:02:34.657 Message: ## Building in Developer Mode ## 00:02:34.657 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:34.657 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:34.657 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:34.657 Program python3 found: YES (/usr/bin/python3) 00:02:34.657 Program cat found: YES (/usr/bin/cat) 00:02:34.657 Compiler for C supports arguments -march=native: YES 00:02:34.657 Checking for size of "void *" : 8 00:02:34.657 Checking for size of "void *" : 8 (cached) 00:02:34.657 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:34.657 Library m found: YES 00:02:34.657 Library numa found: YES 00:02:34.657 Has header "numaif.h" : YES 00:02:34.657 Library fdt found: NO 00:02:34.657 Library execinfo found: NO 00:02:34.657 Has header "execinfo.h" : YES 00:02:34.657 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:34.657 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:34.657 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:34.657 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:34.657 Run-time dependency openssl found: YES 3.1.1 00:02:34.657 Run-time dependency libpcap found: YES 1.10.4 00:02:34.657 Has header "pcap.h" with dependency libpcap: YES 00:02:34.657 Compiler for C supports arguments -Wcast-qual: YES 00:02:34.657 Compiler for C supports arguments -Wdeprecated: YES 00:02:34.657 Compiler for C supports arguments -Wformat: YES 00:02:34.657 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:34.657 Compiler for C supports arguments -Wformat-security: NO 00:02:34.657 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:34.657 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:34.657 Compiler for C supports arguments -Wnested-externs: YES 00:02:34.657 Compiler for C supports arguments -Wold-style-definition: YES 00:02:34.657 Compiler for C supports arguments -Wpointer-arith: YES 00:02:34.657 Compiler for C supports arguments -Wsign-compare: YES 00:02:34.657 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:34.657 Compiler for C supports arguments -Wundef: YES 00:02:34.657 Compiler for C supports arguments -Wwrite-strings: YES 00:02:34.657 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:34.657 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:34.657 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:34.657 Compiler for C supports arguments 
-Wno-zero-length-bounds: YES 00:02:34.657 Program objdump found: YES (/usr/bin/objdump) 00:02:34.657 Compiler for C supports arguments -mavx512f: YES 00:02:34.657 Checking if "AVX512 checking" compiles: YES 00:02:34.657 Fetching value of define "__SSE4_2__" : 1 00:02:34.657 Fetching value of define "__AES__" : 1 00:02:34.657 Fetching value of define "__AVX__" : 1 00:02:34.657 Fetching value of define "__AVX2__" : 1 00:02:34.657 Fetching value of define "__AVX512BW__" : (undefined) 00:02:34.657 Fetching value of define "__AVX512CD__" : (undefined) 00:02:34.657 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:34.657 Fetching value of define "__AVX512F__" : (undefined) 00:02:34.657 Fetching value of define "__AVX512VL__" : (undefined) 00:02:34.657 Fetching value of define "__PCLMUL__" : 1 00:02:34.657 Fetching value of define "__RDRND__" : 1 00:02:34.657 Fetching value of define "__RDSEED__" : 1 00:02:34.657 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:34.657 Fetching value of define "__znver1__" : (undefined) 00:02:34.657 Fetching value of define "__znver2__" : (undefined) 00:02:34.657 Fetching value of define "__znver3__" : (undefined) 00:02:34.657 Fetching value of define "__znver4__" : (undefined) 00:02:34.657 Library asan found: YES 00:02:34.657 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:34.657 Message: lib/log: Defining dependency "log" 00:02:34.657 Message: lib/kvargs: Defining dependency "kvargs" 00:02:34.657 Message: lib/telemetry: Defining dependency "telemetry" 00:02:34.657 Library rt found: YES 00:02:34.657 Checking for function "getentropy" : NO 00:02:34.657 Message: lib/eal: Defining dependency "eal" 00:02:34.657 Message: lib/ring: Defining dependency "ring" 00:02:34.657 Message: lib/rcu: Defining dependency "rcu" 00:02:34.657 Message: lib/mempool: Defining dependency "mempool" 00:02:34.657 Message: lib/mbuf: Defining dependency "mbuf" 00:02:34.657 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:34.657 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:34.657 Compiler for C supports arguments -mpclmul: YES 00:02:34.657 Compiler for C supports arguments -maes: YES 00:02:34.657 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:34.657 Compiler for C supports arguments -mavx512bw: YES 00:02:34.657 Compiler for C supports arguments -mavx512dq: YES 00:02:34.657 Compiler for C supports arguments -mavx512vl: YES 00:02:34.657 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:34.657 Compiler for C supports arguments -mavx2: YES 00:02:34.657 Compiler for C supports arguments -mavx: YES 00:02:34.657 Message: lib/net: Defining dependency "net" 00:02:34.657 Message: lib/meter: Defining dependency "meter" 00:02:34.657 Message: lib/ethdev: Defining dependency "ethdev" 00:02:34.657 Message: lib/pci: Defining dependency "pci" 00:02:34.657 Message: lib/cmdline: Defining dependency "cmdline" 00:02:34.657 Message: lib/hash: Defining dependency "hash" 00:02:34.657 Message: lib/timer: Defining dependency "timer" 00:02:34.657 Message: lib/compressdev: Defining dependency "compressdev" 00:02:34.657 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:34.657 Message: lib/dmadev: Defining dependency "dmadev" 00:02:34.657 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:34.657 Message: lib/power: Defining dependency "power" 00:02:34.657 Message: lib/reorder: Defining dependency "reorder" 00:02:34.657 Message: lib/security: Defining dependency "security" 00:02:34.657 Has header 
"linux/userfaultfd.h" : YES 00:02:34.657 Has header "linux/vduse.h" : YES 00:02:34.657 Message: lib/vhost: Defining dependency "vhost" 00:02:34.657 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:34.657 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:34.657 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:34.657 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:34.657 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:34.657 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:34.657 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:34.657 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:34.657 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:34.657 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:34.657 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:34.657 Configuring doxy-api-html.conf using configuration 00:02:34.657 Configuring doxy-api-man.conf using configuration 00:02:34.657 Program mandb found: YES (/usr/bin/mandb) 00:02:34.657 Program sphinx-build found: NO 00:02:34.657 Configuring rte_build_config.h using configuration 00:02:34.657 Message: 00:02:34.657 ================= 00:02:34.657 Applications Enabled 00:02:34.657 ================= 00:02:34.657 00:02:34.657 apps: 00:02:34.657 00:02:34.657 00:02:34.657 Message: 00:02:34.657 ================= 00:02:34.657 Libraries Enabled 00:02:34.657 ================= 00:02:34.657 00:02:34.657 libs: 00:02:34.657 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:34.657 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:34.657 cryptodev, dmadev, power, reorder, security, vhost, 00:02:34.657 00:02:34.657 Message: 00:02:34.657 =============== 00:02:34.657 Drivers Enabled 00:02:34.657 =============== 00:02:34.657 00:02:34.657 common: 00:02:34.657 00:02:34.657 bus: 00:02:34.657 pci, vdev, 00:02:34.657 mempool: 00:02:34.657 ring, 00:02:34.657 dma: 00:02:34.657 00:02:34.657 net: 00:02:34.657 00:02:34.657 crypto: 00:02:34.657 00:02:34.657 compress: 00:02:34.657 00:02:34.657 vdpa: 00:02:34.657 00:02:34.657 00:02:34.657 Message: 00:02:34.657 ================= 00:02:34.657 Content Skipped 00:02:34.657 ================= 00:02:34.657 00:02:34.657 apps: 00:02:34.657 dumpcap: explicitly disabled via build config 00:02:34.657 graph: explicitly disabled via build config 00:02:34.657 pdump: explicitly disabled via build config 00:02:34.657 proc-info: explicitly disabled via build config 00:02:34.657 test-acl: explicitly disabled via build config 00:02:34.657 test-bbdev: explicitly disabled via build config 00:02:34.657 test-cmdline: explicitly disabled via build config 00:02:34.657 test-compress-perf: explicitly disabled via build config 00:02:34.657 test-crypto-perf: explicitly disabled via build config 00:02:34.657 test-dma-perf: explicitly disabled via build config 00:02:34.657 test-eventdev: explicitly disabled via build config 00:02:34.658 test-fib: explicitly disabled via build config 00:02:34.658 test-flow-perf: explicitly disabled via build config 00:02:34.658 test-gpudev: explicitly disabled via build config 00:02:34.658 test-mldev: explicitly disabled via build config 00:02:34.658 test-pipeline: explicitly disabled via build config 00:02:34.658 test-pmd: explicitly disabled via build config 00:02:34.658 test-regex: explicitly disabled via build config 00:02:34.658 
test-sad: explicitly disabled via build config 00:02:34.658 test-security-perf: explicitly disabled via build config 00:02:34.658 00:02:34.658 libs: 00:02:34.658 argparse: explicitly disabled via build config 00:02:34.658 metrics: explicitly disabled via build config 00:02:34.658 acl: explicitly disabled via build config 00:02:34.658 bbdev: explicitly disabled via build config 00:02:34.658 bitratestats: explicitly disabled via build config 00:02:34.658 bpf: explicitly disabled via build config 00:02:34.658 cfgfile: explicitly disabled via build config 00:02:34.658 distributor: explicitly disabled via build config 00:02:34.658 efd: explicitly disabled via build config 00:02:34.658 eventdev: explicitly disabled via build config 00:02:34.658 dispatcher: explicitly disabled via build config 00:02:34.658 gpudev: explicitly disabled via build config 00:02:34.658 gro: explicitly disabled via build config 00:02:34.658 gso: explicitly disabled via build config 00:02:34.658 ip_frag: explicitly disabled via build config 00:02:34.658 jobstats: explicitly disabled via build config 00:02:34.658 latencystats: explicitly disabled via build config 00:02:34.658 lpm: explicitly disabled via build config 00:02:34.658 member: explicitly disabled via build config 00:02:34.658 pcapng: explicitly disabled via build config 00:02:34.658 rawdev: explicitly disabled via build config 00:02:34.658 regexdev: explicitly disabled via build config 00:02:34.658 mldev: explicitly disabled via build config 00:02:34.658 rib: explicitly disabled via build config 00:02:34.658 sched: explicitly disabled via build config 00:02:34.658 stack: explicitly disabled via build config 00:02:34.658 ipsec: explicitly disabled via build config 00:02:34.658 pdcp: explicitly disabled via build config 00:02:34.658 fib: explicitly disabled via build config 00:02:34.658 port: explicitly disabled via build config 00:02:34.658 pdump: explicitly disabled via build config 00:02:34.658 table: explicitly disabled via build config 00:02:34.658 pipeline: explicitly disabled via build config 00:02:34.658 graph: explicitly disabled via build config 00:02:34.658 node: explicitly disabled via build config 00:02:34.658 00:02:34.658 drivers: 00:02:34.658 common/cpt: not in enabled drivers build config 00:02:34.658 common/dpaax: not in enabled drivers build config 00:02:34.658 common/iavf: not in enabled drivers build config 00:02:34.658 common/idpf: not in enabled drivers build config 00:02:34.658 common/ionic: not in enabled drivers build config 00:02:34.658 common/mvep: not in enabled drivers build config 00:02:34.658 common/octeontx: not in enabled drivers build config 00:02:34.658 bus/auxiliary: not in enabled drivers build config 00:02:34.658 bus/cdx: not in enabled drivers build config 00:02:34.658 bus/dpaa: not in enabled drivers build config 00:02:34.658 bus/fslmc: not in enabled drivers build config 00:02:34.658 bus/ifpga: not in enabled drivers build config 00:02:34.658 bus/platform: not in enabled drivers build config 00:02:34.658 bus/uacce: not in enabled drivers build config 00:02:34.658 bus/vmbus: not in enabled drivers build config 00:02:34.658 common/cnxk: not in enabled drivers build config 00:02:34.658 common/mlx5: not in enabled drivers build config 00:02:34.658 common/nfp: not in enabled drivers build config 00:02:34.658 common/nitrox: not in enabled drivers build config 00:02:34.658 common/qat: not in enabled drivers build config 00:02:34.658 common/sfc_efx: not in enabled drivers build config 00:02:34.658 mempool/bucket: not in enabled 
drivers build config 00:02:34.658 mempool/cnxk: not in enabled drivers build config 00:02:34.658 mempool/dpaa: not in enabled drivers build config 00:02:34.658 mempool/dpaa2: not in enabled drivers build config 00:02:34.658 mempool/octeontx: not in enabled drivers build config 00:02:34.658 mempool/stack: not in enabled drivers build config 00:02:34.658 dma/cnxk: not in enabled drivers build config 00:02:34.658 dma/dpaa: not in enabled drivers build config 00:02:34.658 dma/dpaa2: not in enabled drivers build config 00:02:34.658 dma/hisilicon: not in enabled drivers build config 00:02:34.658 dma/idxd: not in enabled drivers build config 00:02:34.658 dma/ioat: not in enabled drivers build config 00:02:34.658 dma/skeleton: not in enabled drivers build config 00:02:34.658 net/af_packet: not in enabled drivers build config 00:02:34.658 net/af_xdp: not in enabled drivers build config 00:02:34.658 net/ark: not in enabled drivers build config 00:02:34.658 net/atlantic: not in enabled drivers build config 00:02:34.658 net/avp: not in enabled drivers build config 00:02:34.658 net/axgbe: not in enabled drivers build config 00:02:34.658 net/bnx2x: not in enabled drivers build config 00:02:34.658 net/bnxt: not in enabled drivers build config 00:02:34.658 net/bonding: not in enabled drivers build config 00:02:34.658 net/cnxk: not in enabled drivers build config 00:02:34.658 net/cpfl: not in enabled drivers build config 00:02:34.658 net/cxgbe: not in enabled drivers build config 00:02:34.658 net/dpaa: not in enabled drivers build config 00:02:34.658 net/dpaa2: not in enabled drivers build config 00:02:34.658 net/e1000: not in enabled drivers build config 00:02:34.658 net/ena: not in enabled drivers build config 00:02:34.658 net/enetc: not in enabled drivers build config 00:02:34.658 net/enetfec: not in enabled drivers build config 00:02:34.658 net/enic: not in enabled drivers build config 00:02:34.658 net/failsafe: not in enabled drivers build config 00:02:34.658 net/fm10k: not in enabled drivers build config 00:02:34.658 net/gve: not in enabled drivers build config 00:02:34.658 net/hinic: not in enabled drivers build config 00:02:34.658 net/hns3: not in enabled drivers build config 00:02:34.658 net/i40e: not in enabled drivers build config 00:02:34.658 net/iavf: not in enabled drivers build config 00:02:34.658 net/ice: not in enabled drivers build config 00:02:34.658 net/idpf: not in enabled drivers build config 00:02:34.658 net/igc: not in enabled drivers build config 00:02:34.658 net/ionic: not in enabled drivers build config 00:02:34.658 net/ipn3ke: not in enabled drivers build config 00:02:34.658 net/ixgbe: not in enabled drivers build config 00:02:34.658 net/mana: not in enabled drivers build config 00:02:34.658 net/memif: not in enabled drivers build config 00:02:34.658 net/mlx4: not in enabled drivers build config 00:02:34.658 net/mlx5: not in enabled drivers build config 00:02:34.658 net/mvneta: not in enabled drivers build config 00:02:34.658 net/mvpp2: not in enabled drivers build config 00:02:34.658 net/netvsc: not in enabled drivers build config 00:02:34.658 net/nfb: not in enabled drivers build config 00:02:34.658 net/nfp: not in enabled drivers build config 00:02:34.658 net/ngbe: not in enabled drivers build config 00:02:34.658 net/null: not in enabled drivers build config 00:02:34.658 net/octeontx: not in enabled drivers build config 00:02:34.658 net/octeon_ep: not in enabled drivers build config 00:02:34.658 net/pcap: not in enabled drivers build config 00:02:34.658 net/pfe: not in 
enabled drivers build config
00:02:34.658 net/qede: not in enabled drivers build config
00:02:34.658 net/ring: not in enabled drivers build config
00:02:34.658 net/sfc: not in enabled drivers build config
00:02:34.658 net/softnic: not in enabled drivers build config
00:02:34.658 net/tap: not in enabled drivers build config
00:02:34.658 net/thunderx: not in enabled drivers build config
00:02:34.658 net/txgbe: not in enabled drivers build config
00:02:34.658 net/vdev_netvsc: not in enabled drivers build config
00:02:34.658 net/vhost: not in enabled drivers build config
00:02:34.658 net/virtio: not in enabled drivers build config
00:02:34.658 net/vmxnet3: not in enabled drivers build config
00:02:34.658 raw/*: missing internal dependency, "rawdev"
00:02:34.658 crypto/armv8: not in enabled drivers build config
00:02:34.658 crypto/bcmfs: not in enabled drivers build config
00:02:34.658 crypto/caam_jr: not in enabled drivers build config
00:02:34.658 crypto/ccp: not in enabled drivers build config
00:02:34.658 crypto/cnxk: not in enabled drivers build config
00:02:34.658 crypto/dpaa_sec: not in enabled drivers build config
00:02:34.658 crypto/dpaa2_sec: not in enabled drivers build config
00:02:34.658 crypto/ipsec_mb: not in enabled drivers build config
00:02:34.658 crypto/mlx5: not in enabled drivers build config
00:02:34.658 crypto/mvsam: not in enabled drivers build config
00:02:34.658 crypto/nitrox: not in enabled drivers build config
00:02:34.658 crypto/null: not in enabled drivers build config
00:02:34.658 crypto/octeontx: not in enabled drivers build config
00:02:34.658 crypto/openssl: not in enabled drivers build config
00:02:34.658 crypto/scheduler: not in enabled drivers build config
00:02:34.658 crypto/uadk: not in enabled drivers build config
00:02:34.658 crypto/virtio: not in enabled drivers build config
00:02:34.658 compress/isal: not in enabled drivers build config
00:02:34.658 compress/mlx5: not in enabled drivers build config
00:02:34.658 compress/nitrox: not in enabled drivers build config
00:02:34.658 compress/octeontx: not in enabled drivers build config
00:02:34.658 compress/zlib: not in enabled drivers build config
00:02:34.658 regex/*: missing internal dependency, "regexdev"
00:02:34.658 ml/*: missing internal dependency, "mldev"
00:02:34.658 vdpa/ifc: not in enabled drivers build config
00:02:34.658 vdpa/mlx5: not in enabled drivers build config
00:02:34.658 vdpa/nfp: not in enabled drivers build config
00:02:34.658 vdpa/sfc: not in enabled drivers build config
00:02:34.658 event/*: missing internal dependency, "eventdev"
00:02:34.658 baseband/*: missing internal dependency, "bbdev"
00:02:34.658 gpu/*: missing internal dependency, "gpudev"
00:02:34.658
00:02:34.658
00:02:34.658 Build targets in project: 85
00:02:34.658
00:02:34.658 DPDK 24.03.0
00:02:34.658
00:02:34.658 User defined options
00:02:34.658 buildtype : debug
00:02:34.658 default_library : shared
00:02:34.658 libdir : lib
00:02:34.658 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:34.658 b_sanitize : address
00:02:34.658 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:34.658 c_link_args :
00:02:34.659 cpu_instruction_set: native
00:02:34.659 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:34.659 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:34.659 enable_docs : false
00:02:34.659 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:34.659 enable_kmods : false
00:02:34.659 max_lcores : 128
00:02:34.659 tests : false
00:02:34.659
00:02:34.659 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:34.916 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:34.916 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:34.916 [2/268] Linking static target lib/librte_kvargs.a
00:02:34.916 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:34.916 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:34.916 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:34.916 [6/268] Linking static target lib/librte_log.a
00:02:35.483 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.483 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:35.742 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:35.742 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:35.742 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:35.742 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:35.742 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:35.742 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:35.999 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:35.999 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:35.999 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:35.999 [18/268] Linking static target lib/librte_telemetry.a
00:02:35.999 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:36.256 [20/268] Linking target lib/librte_log.so.24.1
00:02:36.514 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:36.514 [22/268] Linking target lib/librte_kvargs.so.24.1
00:02:36.514 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:36.514 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:36.773 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:36.773 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:36.773 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:36.773 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:36.773 [29/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:37.031 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:37.031 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.031 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:37.031 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:37.031 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:37.290 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:37.290 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:37.290 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:37.857 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:37.857 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:37.857 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:37.857 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:37.857 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:37.857 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:37.857 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:37.857 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:37.857 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:38.115 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:38.115 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:38.373 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:38.373 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:38.632 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:38.632 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:38.632 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:38.891 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:38.891 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:38.891 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:38.891 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:38.891 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:39.149 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:39.149 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:39.407 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:39.407 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:39.666 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:39.666 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:39.666 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:39.666 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:39.924 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:40.182 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:40.182 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:40.182 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:40.182 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:40.441 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:40.441 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 
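
[Note: the [1/268] through [73/268] entries above are ninja building DPDK's core libraries (kvargs, log, telemetry, and the EAL). A minimal consumer of the resulting librte_eal, as a hedged sketch and not part of this build, would look like the following; it assumes an installed DPDK and compilation via `pkg-config --cflags --libs libdpdk`.]

#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>

int main(int argc, char **argv)
{
	/* rte_eal_init() parses EAL arguments (cores, hugepages, ...) and
	 * returns the number of argv entries consumed, or -1 on error. */
	int ret = rte_eal_init(argc, argv);
	if (ret < 0) {
		fprintf(stderr, "rte_eal_init() failed\n");
		return 1;
	}
	printf("EAL initialized, %u lcore(s) available\n", rte_lcore_count());
	rte_eal_cleanup(); /* releases hugepage memory and other EAL resources */
	return 0;
}
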
00:02:40.441 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:40.441 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:40.441 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:40.441 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:40.700 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:40.700 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:40.958 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:40.958 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:40.958 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:41.216 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:41.216 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:41.216 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:41.216 [86/268] Linking static target lib/librte_eal.a 00:02:41.474 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:41.474 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:41.474 [89/268] Linking static target lib/librte_ring.a 00:02:41.732 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:41.732 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:41.732 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:41.732 [93/268] Linking static target lib/librte_rcu.a 00:02:42.005 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:42.005 [95/268] Linking static target lib/librte_mempool.a 00:02:42.005 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:42.005 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:42.005 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:42.005 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:42.005 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.272 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:42.273 [102/268] Linking static target lib/librte_mbuf.a 00:02:42.531 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.531 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:42.787 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:42.787 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:42.787 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:42.787 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:42.787 [109/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:42.787 [110/268] Linking static target lib/librte_meter.a 00:02:42.787 [111/268] Linking static target lib/librte_net.a 00:02:43.354 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.354 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.354 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:43.354 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:43.354 [116/268] 
Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.354 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:43.354 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.354 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:43.922 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:44.180 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:44.180 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:44.439 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:44.697 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:44.697 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:44.697 [126/268] Linking static target lib/librte_pci.a 00:02:44.956 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:44.956 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:44.956 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:44.956 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:44.956 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:44.956 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:44.956 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:45.215 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.215 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:45.215 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:45.215 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:45.215 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:45.215 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:45.215 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:45.215 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:45.215 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:45.473 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:45.473 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:45.473 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:45.732 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:45.732 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:45.732 [148/268] Linking static target lib/librte_cmdline.a 00:02:45.990 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:46.249 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:46.249 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:46.249 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:46.249 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:46.507 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:46.507 [155/268] Linking static target 
lib/librte_timer.a 00:02:46.766 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:47.024 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:47.024 [158/268] Linking static target lib/librte_compressdev.a 00:02:47.024 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:47.024 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:47.024 [161/268] Linking static target lib/librte_hash.a 00:02:47.024 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:47.024 [163/268] Linking static target lib/librte_ethdev.a 00:02:47.024 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:47.283 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.283 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.283 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:47.541 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:47.541 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:47.541 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:47.541 [171/268] Linking static target lib/librte_dmadev.a 00:02:48.108 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.108 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:48.108 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:48.108 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:48.367 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:48.367 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.367 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.625 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:48.625 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:48.625 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:48.625 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:48.625 [183/268] Linking static target lib/librte_cryptodev.a 00:02:48.883 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:48.883 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:48.883 [186/268] Linking static target lib/librte_power.a 00:02:49.141 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:49.141 [188/268] Linking static target lib/librte_reorder.a 00:02:49.399 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:49.399 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:49.399 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:49.399 [192/268] Linking static target lib/librte_security.a 00:02:49.658 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:49.658 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.225 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:50.225 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:50.225 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.483 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:50.483 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:50.741 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:51.000 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:51.000 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:51.259 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:51.259 [204/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.259 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:51.259 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:51.517 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:51.775 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:51.775 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:51.775 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:51.775 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:52.033 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:52.033 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:52.033 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:52.033 [215/268] Linking static target drivers/librte_bus_vdev.a 00:02:52.033 [216/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:52.292 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:52.292 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:52.292 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:52.292 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:52.292 [221/268] Linking static target drivers/librte_bus_pci.a 00:02:52.292 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.551 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:52.551 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:52.551 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:52.551 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:52.809 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.377 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.377 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:53.377 [230/268] Linking target lib/librte_eal.so.24.1 00:02:53.377 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:53.637 [232/268] Linking target lib/librte_pci.so.24.1 00:02:53.637 [233/268] Linking target 
lib/librte_ring.so.24.1 00:02:53.637 [234/268] Linking target lib/librte_meter.so.24.1 00:02:53.637 [235/268] Linking target lib/librte_timer.so.24.1 00:02:53.637 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:53.637 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:53.637 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:53.637 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:53.637 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:53.637 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:53.637 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:53.637 [243/268] Linking target lib/librte_rcu.so.24.1 00:02:53.637 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:53.637 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:53.896 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:53.896 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:53.896 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:53.896 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:54.154 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:54.155 [251/268] Linking target lib/librte_net.so.24.1 00:02:54.155 [252/268] Linking target lib/librte_compressdev.so.24.1 00:02:54.155 [253/268] Linking target lib/librte_reorder.so.24.1 00:02:54.155 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:54.155 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:54.155 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:54.155 [257/268] Linking target lib/librte_security.so.24.1 00:02:54.155 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:54.155 [259/268] Linking target lib/librte_hash.so.24.1 00:02:54.414 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:54.981 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.259 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:55.259 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:55.555 [264/268] Linking target lib/librte_power.so.24.1 00:02:58.088 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:58.088 [266/268] Linking static target lib/librte_vhost.a 00:02:59.466 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.466 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:59.466 INFO: autodetecting backend as ninja 00:02:59.466 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:21.400 CC lib/ut/ut.o 00:03:21.400 CC lib/ut_mock/mock.o 00:03:21.400 CC lib/log/log.o 00:03:21.400 CC lib/log/log_flags.o 00:03:21.400 CC lib/log/log_deprecated.o 00:03:21.400 LIB libspdk_ut_mock.a 00:03:21.400 LIB libspdk_ut.a 00:03:21.400 LIB libspdk_log.a 00:03:21.400 SO libspdk_ut_mock.so.6.0 00:03:21.400 SO libspdk_ut.so.2.0 00:03:21.400 SO libspdk_log.so.7.1 00:03:21.400 SYMLINK libspdk_ut_mock.so 00:03:21.400 SYMLINK libspdk_ut.so 00:03:21.400 SYMLINK libspdk_log.so 00:03:21.400 CC lib/ioat/ioat.o 
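
[Note: at this point the log switches from the DPDK submodule's ninja build to SPDK's own make build, visible as CC / LIB / SO / SYMLINK lines. lib/log is among the first libraries produced (libspdk_log.a above); a minimal sketch of its public API follows, assuming spdk/log.h from the tree being built here and linkage against the produced libspdk_log.]

#include "spdk/log.h"

int main(void)
{
	/* Print everything up to and including debug-level messages. */
	spdk_log_set_print_level(SPDK_LOG_DEBUG);

	SPDK_NOTICELOG("notice-level message, printf-style: %d\n", 42);
	SPDK_WARNLOG("warning-level message\n");
	SPDK_ERRLOG("error-level message\n");
	return 0;
}
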
00:03:21.400 CC lib/dma/dma.o 00:03:21.400 CXX lib/trace_parser/trace.o 00:03:21.400 CC lib/util/base64.o 00:03:21.400 CC lib/util/cpuset.o 00:03:21.400 CC lib/util/bit_array.o 00:03:21.400 CC lib/util/crc16.o 00:03:21.400 CC lib/util/crc32.o 00:03:21.400 CC lib/util/crc32c.o 00:03:21.400 CC lib/vfio_user/host/vfio_user_pci.o 00:03:21.400 CC lib/vfio_user/host/vfio_user.o 00:03:21.400 CC lib/util/crc32_ieee.o 00:03:21.400 CC lib/util/crc64.o 00:03:21.400 CC lib/util/dif.o 00:03:21.400 CC lib/util/fd.o 00:03:21.400 CC lib/util/fd_group.o 00:03:21.400 LIB libspdk_dma.a 00:03:21.400 SO libspdk_dma.so.5.0 00:03:21.400 CC lib/util/file.o 00:03:21.400 LIB libspdk_ioat.a 00:03:21.400 CC lib/util/hexlify.o 00:03:21.400 SO libspdk_ioat.so.7.0 00:03:21.400 SYMLINK libspdk_dma.so 00:03:21.400 CC lib/util/iov.o 00:03:21.400 CC lib/util/math.o 00:03:21.400 CC lib/util/net.o 00:03:21.400 LIB libspdk_vfio_user.a 00:03:21.400 SYMLINK libspdk_ioat.so 00:03:21.400 CC lib/util/pipe.o 00:03:21.400 SO libspdk_vfio_user.so.5.0 00:03:21.400 CC lib/util/strerror_tls.o 00:03:21.400 CC lib/util/string.o 00:03:21.400 SYMLINK libspdk_vfio_user.so 00:03:21.400 CC lib/util/uuid.o 00:03:21.400 CC lib/util/xor.o 00:03:21.400 CC lib/util/zipf.o 00:03:21.400 CC lib/util/md5.o 00:03:21.400 LIB libspdk_util.a 00:03:21.400 SO libspdk_util.so.10.1 00:03:21.400 LIB libspdk_trace_parser.a 00:03:21.400 SYMLINK libspdk_util.so 00:03:21.400 SO libspdk_trace_parser.so.6.0 00:03:21.400 SYMLINK libspdk_trace_parser.so 00:03:21.659 CC lib/env_dpdk/env.o 00:03:21.659 CC lib/rdma_utils/rdma_utils.o 00:03:21.659 CC lib/env_dpdk/pci.o 00:03:21.659 CC lib/env_dpdk/memory.o 00:03:21.659 CC lib/env_dpdk/init.o 00:03:21.659 CC lib/env_dpdk/threads.o 00:03:21.659 CC lib/idxd/idxd.o 00:03:21.659 CC lib/vmd/vmd.o 00:03:21.659 CC lib/json/json_parse.o 00:03:21.659 CC lib/conf/conf.o 00:03:21.659 CC lib/env_dpdk/pci_ioat.o 00:03:21.918 CC lib/json/json_util.o 00:03:21.918 LIB libspdk_conf.a 00:03:21.918 CC lib/json/json_write.o 00:03:21.918 LIB libspdk_rdma_utils.a 00:03:21.918 SO libspdk_conf.so.6.0 00:03:21.918 SO libspdk_rdma_utils.so.1.0 00:03:21.918 SYMLINK libspdk_conf.so 00:03:21.918 SYMLINK libspdk_rdma_utils.so 00:03:21.918 CC lib/idxd/idxd_user.o 00:03:21.918 CC lib/idxd/idxd_kernel.o 00:03:21.918 CC lib/env_dpdk/pci_virtio.o 00:03:21.918 CC lib/env_dpdk/pci_vmd.o 00:03:22.177 CC lib/env_dpdk/pci_idxd.o 00:03:22.177 CC lib/env_dpdk/pci_event.o 00:03:22.177 CC lib/vmd/led.o 00:03:22.177 LIB libspdk_json.a 00:03:22.177 SO libspdk_json.so.6.0 00:03:22.177 CC lib/env_dpdk/sigbus_handler.o 00:03:22.177 CC lib/env_dpdk/pci_dpdk.o 00:03:22.177 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:22.436 CC lib/rdma_provider/common.o 00:03:22.436 SYMLINK libspdk_json.so 00:03:22.436 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:22.436 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:22.436 LIB libspdk_idxd.a 00:03:22.436 SO libspdk_idxd.so.12.1 00:03:22.436 LIB libspdk_vmd.a 00:03:22.436 SO libspdk_vmd.so.6.0 00:03:22.436 SYMLINK libspdk_idxd.so 00:03:22.436 SYMLINK libspdk_vmd.so 00:03:22.436 LIB libspdk_rdma_provider.a 00:03:22.693 CC lib/jsonrpc/jsonrpc_server.o 00:03:22.693 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:22.693 SO libspdk_rdma_provider.so.7.0 00:03:22.693 CC lib/jsonrpc/jsonrpc_client.o 00:03:22.693 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:22.693 SYMLINK libspdk_rdma_provider.so 00:03:22.951 LIB libspdk_jsonrpc.a 00:03:22.951 SO libspdk_jsonrpc.so.6.0 00:03:22.951 SYMLINK libspdk_jsonrpc.so 00:03:23.210 CC lib/rpc/rpc.o 00:03:23.469 LIB 
libspdk_env_dpdk.a 00:03:23.469 SO libspdk_env_dpdk.so.15.1 00:03:23.469 LIB libspdk_rpc.a 00:03:23.728 SYMLINK libspdk_env_dpdk.so 00:03:23.728 SO libspdk_rpc.so.6.0 00:03:23.728 SYMLINK libspdk_rpc.so 00:03:23.987 CC lib/notify/notify.o 00:03:23.987 CC lib/notify/notify_rpc.o 00:03:23.987 CC lib/keyring/keyring.o 00:03:23.987 CC lib/keyring/keyring_rpc.o 00:03:23.987 CC lib/trace/trace_flags.o 00:03:23.987 CC lib/trace/trace.o 00:03:23.987 CC lib/trace/trace_rpc.o 00:03:23.987 LIB libspdk_notify.a 00:03:24.246 SO libspdk_notify.so.6.0 00:03:24.246 SYMLINK libspdk_notify.so 00:03:24.246 LIB libspdk_trace.a 00:03:24.246 LIB libspdk_keyring.a 00:03:24.246 SO libspdk_trace.so.11.0 00:03:24.246 SO libspdk_keyring.so.2.0 00:03:24.246 SYMLINK libspdk_trace.so 00:03:24.246 SYMLINK libspdk_keyring.so 00:03:24.505 CC lib/thread/thread.o 00:03:24.505 CC lib/thread/iobuf.o 00:03:24.505 CC lib/sock/sock.o 00:03:24.505 CC lib/sock/sock_rpc.o 00:03:25.073 LIB libspdk_sock.a 00:03:25.073 SO libspdk_sock.so.10.0 00:03:25.332 SYMLINK libspdk_sock.so 00:03:25.591 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:25.591 CC lib/nvme/nvme_fabric.o 00:03:25.591 CC lib/nvme/nvme_ctrlr.o 00:03:25.591 CC lib/nvme/nvme_ns_cmd.o 00:03:25.591 CC lib/nvme/nvme_qpair.o 00:03:25.591 CC lib/nvme/nvme_pcie_common.o 00:03:25.591 CC lib/nvme/nvme_ns.o 00:03:25.591 CC lib/nvme/nvme_pcie.o 00:03:25.591 CC lib/nvme/nvme.o 00:03:26.541 CC lib/nvme/nvme_quirks.o 00:03:26.541 CC lib/nvme/nvme_transport.o 00:03:26.541 CC lib/nvme/nvme_discovery.o 00:03:26.541 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:26.541 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:26.541 CC lib/nvme/nvme_tcp.o 00:03:26.541 LIB libspdk_thread.a 00:03:26.541 CC lib/nvme/nvme_opal.o 00:03:26.800 SO libspdk_thread.so.11.0 00:03:26.800 SYMLINK libspdk_thread.so 00:03:27.059 CC lib/accel/accel.o 00:03:27.059 CC lib/blob/blobstore.o 00:03:27.059 CC lib/nvme/nvme_io_msg.o 00:03:27.059 CC lib/nvme/nvme_poll_group.o 00:03:27.317 CC lib/nvme/nvme_zns.o 00:03:27.317 CC lib/nvme/nvme_stubs.o 00:03:27.317 CC lib/nvme/nvme_auth.o 00:03:27.576 CC lib/nvme/nvme_cuse.o 00:03:27.835 CC lib/blob/request.o 00:03:28.093 CC lib/init/json_config.o 00:03:28.093 CC lib/virtio/virtio.o 00:03:28.093 CC lib/vfu_tgt/tgt_endpoint.o 00:03:28.352 CC lib/virtio/virtio_vhost_user.o 00:03:28.352 CC lib/init/subsystem.o 00:03:28.352 CC lib/accel/accel_rpc.o 00:03:28.610 CC lib/init/subsystem_rpc.o 00:03:28.610 CC lib/init/rpc.o 00:03:28.610 CC lib/virtio/virtio_vfio_user.o 00:03:28.610 CC lib/vfu_tgt/tgt_rpc.o 00:03:28.610 CC lib/virtio/virtio_pci.o 00:03:28.610 CC lib/nvme/nvme_vfio_user.o 00:03:28.610 CC lib/nvme/nvme_rdma.o 00:03:28.610 CC lib/accel/accel_sw.o 00:03:28.610 LIB libspdk_init.a 00:03:28.868 CC lib/fsdev/fsdev.o 00:03:28.868 CC lib/blob/zeroes.o 00:03:28.868 LIB libspdk_vfu_tgt.a 00:03:28.868 SO libspdk_init.so.6.0 00:03:28.868 SO libspdk_vfu_tgt.so.3.0 00:03:28.868 SYMLINK libspdk_init.so 00:03:28.868 CC lib/fsdev/fsdev_io.o 00:03:28.868 CC lib/fsdev/fsdev_rpc.o 00:03:28.868 SYMLINK libspdk_vfu_tgt.so 00:03:28.868 CC lib/blob/blob_bs_dev.o 00:03:28.868 LIB libspdk_virtio.a 00:03:29.127 SO libspdk_virtio.so.7.0 00:03:29.127 CC lib/event/reactor.o 00:03:29.127 CC lib/event/app.o 00:03:29.127 LIB libspdk_accel.a 00:03:29.127 SYMLINK libspdk_virtio.so 00:03:29.127 CC lib/event/log_rpc.o 00:03:29.127 SO libspdk_accel.so.16.0 00:03:29.386 SYMLINK libspdk_accel.so 00:03:29.386 CC lib/event/app_rpc.o 00:03:29.386 CC lib/event/scheduler_static.o 00:03:29.386 CC lib/bdev/bdev_rpc.o 00:03:29.386 CC 
lib/bdev/bdev_zone.o 00:03:29.386 CC lib/bdev/bdev.o 00:03:29.386 CC lib/bdev/part.o 00:03:29.644 LIB libspdk_fsdev.a 00:03:29.644 CC lib/bdev/scsi_nvme.o 00:03:29.644 SO libspdk_fsdev.so.2.0 00:03:29.644 LIB libspdk_event.a 00:03:29.644 SYMLINK libspdk_fsdev.so 00:03:29.644 SO libspdk_event.so.14.0 00:03:29.903 SYMLINK libspdk_event.so 00:03:29.903 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:30.469 LIB libspdk_nvme.a 00:03:30.725 SO libspdk_nvme.so.15.0 00:03:30.725 LIB libspdk_fuse_dispatcher.a 00:03:30.725 SO libspdk_fuse_dispatcher.so.1.0 00:03:30.983 SYMLINK libspdk_fuse_dispatcher.so 00:03:30.983 SYMLINK libspdk_nvme.so 00:03:31.550 LIB libspdk_blob.a 00:03:31.835 SO libspdk_blob.so.11.0 00:03:31.835 SYMLINK libspdk_blob.so 00:03:32.116 CC lib/lvol/lvol.o 00:03:32.116 CC lib/blobfs/blobfs.o 00:03:32.116 CC lib/blobfs/tree.o 00:03:33.493 LIB libspdk_bdev.a 00:03:33.493 LIB libspdk_blobfs.a 00:03:33.493 SO libspdk_bdev.so.17.0 00:03:33.493 SO libspdk_blobfs.so.10.0 00:03:33.493 SYMLINK libspdk_blobfs.so 00:03:33.493 SYMLINK libspdk_bdev.so 00:03:33.493 LIB libspdk_lvol.a 00:03:33.493 SO libspdk_lvol.so.10.0 00:03:33.493 SYMLINK libspdk_lvol.so 00:03:33.751 CC lib/ftl/ftl_core.o 00:03:33.751 CC lib/ftl/ftl_layout.o 00:03:33.751 CC lib/ftl/ftl_init.o 00:03:33.751 CC lib/ublk/ublk.o 00:03:33.751 CC lib/ftl/ftl_debug.o 00:03:33.751 CC lib/ublk/ublk_rpc.o 00:03:33.751 CC lib/ftl/ftl_io.o 00:03:33.751 CC lib/nvmf/ctrlr.o 00:03:33.751 CC lib/scsi/dev.o 00:03:33.751 CC lib/nbd/nbd.o 00:03:33.751 CC lib/nbd/nbd_rpc.o 00:03:34.009 CC lib/nvmf/ctrlr_discovery.o 00:03:34.009 CC lib/scsi/lun.o 00:03:34.009 CC lib/ftl/ftl_sb.o 00:03:34.009 CC lib/ftl/ftl_l2p.o 00:03:34.009 CC lib/ftl/ftl_l2p_flat.o 00:03:34.009 CC lib/nvmf/ctrlr_bdev.o 00:03:34.009 CC lib/nvmf/subsystem.o 00:03:34.267 LIB libspdk_nbd.a 00:03:34.267 SO libspdk_nbd.so.7.0 00:03:34.267 CC lib/nvmf/nvmf.o 00:03:34.267 CC lib/nvmf/nvmf_rpc.o 00:03:34.267 SYMLINK libspdk_nbd.so 00:03:34.267 CC lib/nvmf/transport.o 00:03:34.267 CC lib/ftl/ftl_nv_cache.o 00:03:34.267 CC lib/scsi/port.o 00:03:34.526 LIB libspdk_ublk.a 00:03:34.526 SO libspdk_ublk.so.3.0 00:03:34.526 CC lib/scsi/scsi.o 00:03:34.526 SYMLINK libspdk_ublk.so 00:03:34.526 CC lib/nvmf/tcp.o 00:03:34.526 CC lib/nvmf/stubs.o 00:03:34.784 CC lib/scsi/scsi_bdev.o 00:03:35.043 CC lib/nvmf/mdns_server.o 00:03:35.302 CC lib/nvmf/vfio_user.o 00:03:35.302 CC lib/scsi/scsi_pr.o 00:03:35.560 CC lib/nvmf/rdma.o 00:03:35.560 CC lib/ftl/ftl_band.o 00:03:35.560 CC lib/nvmf/auth.o 00:03:35.560 CC lib/scsi/scsi_rpc.o 00:03:35.560 CC lib/scsi/task.o 00:03:35.819 CC lib/ftl/ftl_band_ops.o 00:03:35.819 CC lib/ftl/ftl_writer.o 00:03:35.819 CC lib/ftl/ftl_rq.o 00:03:35.819 LIB libspdk_scsi.a 00:03:36.078 CC lib/ftl/ftl_reloc.o 00:03:36.078 SO libspdk_scsi.so.9.0 00:03:36.078 CC lib/ftl/ftl_l2p_cache.o 00:03:36.078 SYMLINK libspdk_scsi.so 00:03:36.078 CC lib/ftl/ftl_p2l.o 00:03:36.078 CC lib/ftl/ftl_p2l_log.o 00:03:36.337 CC lib/ftl/mngt/ftl_mngt.o 00:03:36.337 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:36.597 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:36.597 CC lib/iscsi/conn.o 00:03:36.597 CC lib/iscsi/init_grp.o 00:03:36.597 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:36.858 CC lib/vhost/vhost.o 00:03:36.858 CC lib/vhost/vhost_rpc.o 00:03:36.858 CC lib/vhost/vhost_scsi.o 00:03:36.858 CC lib/iscsi/iscsi.o 00:03:36.858 CC lib/iscsi/param.o 00:03:36.858 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:37.116 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:37.374 CC lib/iscsi/portal_grp.o 00:03:37.374 CC lib/iscsi/tgt_node.o 
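
[Note: the lib/iscsi objects compiling above implement SPDK's iSCSI target, whose header and data digests are CRC-32C; SPDK exposes that in the util library built earlier in this log. A small hedged sketch, assuming spdk/crc32.h and the conventional CRC-32C pre/post inversion used for iSCSI digests.]

#include <stdio.h>
#include <string.h>
#include "spdk/crc32.h"

int main(void)
{
	const char *pdu = "iscsi payload";

	/* CRC-32C with the conventional all-ones seed and final inversion,
	 * the form iSCSI digests use. */
	uint32_t crc = spdk_crc32c_update(pdu, strlen(pdu), ~0U) ^ ~0U;

	printf("crc32c = 0x%08x\n", crc);
	return 0;
}
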
00:03:37.633 CC lib/iscsi/iscsi_subsystem.o 00:03:37.633 CC lib/vhost/vhost_blk.o 00:03:37.633 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:37.633 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:37.633 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:37.891 CC lib/vhost/rte_vhost_user.o 00:03:37.891 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:37.891 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:37.891 CC lib/iscsi/iscsi_rpc.o 00:03:37.891 CC lib/iscsi/task.o 00:03:38.149 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:38.149 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:38.149 CC lib/ftl/utils/ftl_conf.o 00:03:38.149 CC lib/ftl/utils/ftl_md.o 00:03:38.149 CC lib/ftl/utils/ftl_mempool.o 00:03:38.407 CC lib/ftl/utils/ftl_bitmap.o 00:03:38.407 CC lib/ftl/utils/ftl_property.o 00:03:38.407 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:38.407 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:38.666 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:38.666 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:38.666 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:38.666 LIB libspdk_nvmf.a 00:03:38.666 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:38.666 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:38.666 LIB libspdk_iscsi.a 00:03:38.666 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:38.666 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:38.666 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:38.925 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:38.925 SO libspdk_iscsi.so.8.0 00:03:38.925 SO libspdk_nvmf.so.20.0 00:03:38.925 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:38.925 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:38.925 CC lib/ftl/base/ftl_base_dev.o 00:03:38.925 CC lib/ftl/base/ftl_base_bdev.o 00:03:38.925 CC lib/ftl/ftl_trace.o 00:03:39.183 SYMLINK libspdk_iscsi.so 00:03:39.183 LIB libspdk_vhost.a 00:03:39.183 SO libspdk_vhost.so.8.0 00:03:39.183 SYMLINK libspdk_nvmf.so 00:03:39.183 SYMLINK libspdk_vhost.so 00:03:39.442 LIB libspdk_ftl.a 00:03:39.701 SO libspdk_ftl.so.9.0 00:03:39.959 SYMLINK libspdk_ftl.so 00:03:40.218 CC module/vfu_device/vfu_virtio.o 00:03:40.218 CC module/env_dpdk/env_dpdk_rpc.o 00:03:40.477 CC module/accel/error/accel_error.o 00:03:40.477 CC module/accel/dsa/accel_dsa.o 00:03:40.477 CC module/blob/bdev/blob_bdev.o 00:03:40.477 CC module/sock/posix/posix.o 00:03:40.477 CC module/fsdev/aio/fsdev_aio.o 00:03:40.477 CC module/accel/ioat/accel_ioat.o 00:03:40.477 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:40.477 CC module/keyring/file/keyring.o 00:03:40.477 LIB libspdk_env_dpdk_rpc.a 00:03:40.477 SO libspdk_env_dpdk_rpc.so.6.0 00:03:40.735 SYMLINK libspdk_env_dpdk_rpc.so 00:03:40.735 CC module/keyring/file/keyring_rpc.o 00:03:40.735 CC module/accel/error/accel_error_rpc.o 00:03:40.735 CC module/accel/ioat/accel_ioat_rpc.o 00:03:40.735 LIB libspdk_scheduler_dynamic.a 00:03:40.735 SO libspdk_scheduler_dynamic.so.4.0 00:03:40.735 LIB libspdk_blob_bdev.a 00:03:40.735 LIB libspdk_keyring_file.a 00:03:40.735 CC module/accel/dsa/accel_dsa_rpc.o 00:03:40.735 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:40.735 SYMLINK libspdk_scheduler_dynamic.so 00:03:40.735 SO libspdk_blob_bdev.so.11.0 00:03:40.735 SO libspdk_keyring_file.so.2.0 00:03:40.735 LIB libspdk_accel_ioat.a 00:03:40.735 LIB libspdk_accel_error.a 00:03:40.993 SO libspdk_accel_ioat.so.6.0 00:03:40.993 SO libspdk_accel_error.so.2.0 00:03:40.993 SYMLINK libspdk_blob_bdev.so 00:03:40.993 SYMLINK libspdk_keyring_file.so 00:03:40.993 CC module/vfu_device/vfu_virtio_blk.o 00:03:40.993 SYMLINK libspdk_accel_ioat.so 00:03:40.993 SYMLINK libspdk_accel_error.so 00:03:40.993 CC module/vfu_device/vfu_virtio_scsi.o 00:03:40.993 CC 
module/vfu_device/vfu_virtio_rpc.o 00:03:40.993 LIB libspdk_accel_dsa.a 00:03:40.993 LIB libspdk_scheduler_dpdk_governor.a 00:03:40.993 CC module/scheduler/gscheduler/gscheduler.o 00:03:40.993 SO libspdk_accel_dsa.so.5.0 00:03:40.993 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:40.993 SYMLINK libspdk_accel_dsa.so 00:03:40.993 CC module/keyring/linux/keyring.o 00:03:41.252 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:41.252 CC module/keyring/linux/keyring_rpc.o 00:03:41.252 LIB libspdk_scheduler_gscheduler.a 00:03:41.252 SO libspdk_scheduler_gscheduler.so.4.0 00:03:41.252 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:41.252 CC module/fsdev/aio/linux_aio_mgr.o 00:03:41.252 LIB libspdk_keyring_linux.a 00:03:41.252 CC module/accel/iaa/accel_iaa.o 00:03:41.252 SYMLINK libspdk_scheduler_gscheduler.so 00:03:41.252 SO libspdk_keyring_linux.so.1.0 00:03:41.510 CC module/accel/iaa/accel_iaa_rpc.o 00:03:41.510 CC module/sock/uring/uring.o 00:03:41.510 CC module/vfu_device/vfu_virtio_fs.o 00:03:41.510 SYMLINK libspdk_keyring_linux.so 00:03:41.510 LIB libspdk_sock_posix.a 00:03:41.510 SO libspdk_sock_posix.so.6.0 00:03:41.510 LIB libspdk_fsdev_aio.a 00:03:41.510 CC module/bdev/delay/vbdev_delay.o 00:03:41.510 SYMLINK libspdk_sock_posix.so 00:03:41.510 LIB libspdk_accel_iaa.a 00:03:41.510 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:41.510 SO libspdk_fsdev_aio.so.1.0 00:03:41.510 CC module/bdev/error/vbdev_error.o 00:03:41.510 CC module/bdev/gpt/gpt.o 00:03:41.510 CC module/blobfs/bdev/blobfs_bdev.o 00:03:41.510 SO libspdk_accel_iaa.so.3.0 00:03:41.769 SYMLINK libspdk_fsdev_aio.so 00:03:41.769 SYMLINK libspdk_accel_iaa.so 00:03:41.769 CC module/bdev/error/vbdev_error_rpc.o 00:03:41.769 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:41.769 LIB libspdk_vfu_device.a 00:03:41.769 CC module/bdev/lvol/vbdev_lvol.o 00:03:41.769 SO libspdk_vfu_device.so.3.0 00:03:41.769 CC module/bdev/gpt/vbdev_gpt.o 00:03:41.769 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:41.769 SYMLINK libspdk_vfu_device.so 00:03:42.027 LIB libspdk_blobfs_bdev.a 00:03:42.027 LIB libspdk_bdev_error.a 00:03:42.027 SO libspdk_blobfs_bdev.so.6.0 00:03:42.027 SO libspdk_bdev_error.so.6.0 00:03:42.027 SYMLINK libspdk_blobfs_bdev.so 00:03:42.027 CC module/bdev/null/bdev_null.o 00:03:42.027 CC module/bdev/malloc/bdev_malloc.o 00:03:42.027 CC module/bdev/nvme/bdev_nvme.o 00:03:42.027 LIB libspdk_bdev_delay.a 00:03:42.027 SYMLINK libspdk_bdev_error.so 00:03:42.027 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:42.027 SO libspdk_bdev_delay.so.6.0 00:03:42.285 LIB libspdk_bdev_gpt.a 00:03:42.285 SYMLINK libspdk_bdev_delay.so 00:03:42.285 CC module/bdev/nvme/nvme_rpc.o 00:03:42.285 SO libspdk_bdev_gpt.so.6.0 00:03:42.285 CC module/bdev/passthru/vbdev_passthru.o 00:03:42.285 SYMLINK libspdk_bdev_gpt.so 00:03:42.285 CC module/bdev/nvme/bdev_mdns_client.o 00:03:42.285 LIB libspdk_bdev_lvol.a 00:03:42.543 LIB libspdk_sock_uring.a 00:03:42.543 SO libspdk_bdev_lvol.so.6.0 00:03:42.543 SO libspdk_sock_uring.so.5.0 00:03:42.543 CC module/bdev/null/bdev_null_rpc.o 00:03:42.543 CC module/bdev/raid/bdev_raid.o 00:03:42.543 CC module/bdev/nvme/vbdev_opal.o 00:03:42.543 SYMLINK libspdk_bdev_lvol.so 00:03:42.543 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:42.543 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:42.543 SYMLINK libspdk_sock_uring.so 00:03:42.543 CC module/bdev/raid/bdev_raid_rpc.o 00:03:42.543 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:42.543 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:42.801 LIB libspdk_bdev_null.a 00:03:42.801 SO 
libspdk_bdev_null.so.6.0 00:03:42.801 LIB libspdk_bdev_malloc.a 00:03:42.801 SYMLINK libspdk_bdev_null.so 00:03:42.801 SO libspdk_bdev_malloc.so.6.0 00:03:42.801 CC module/bdev/raid/bdev_raid_sb.o 00:03:42.801 CC module/bdev/raid/raid0.o 00:03:42.801 LIB libspdk_bdev_passthru.a 00:03:42.801 SYMLINK libspdk_bdev_malloc.so 00:03:42.801 SO libspdk_bdev_passthru.so.6.0 00:03:42.801 CC module/bdev/split/vbdev_split.o 00:03:43.059 CC module/bdev/raid/raid1.o 00:03:43.059 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:43.059 SYMLINK libspdk_bdev_passthru.so 00:03:43.059 CC module/bdev/uring/bdev_uring.o 00:03:43.059 CC module/bdev/aio/bdev_aio.o 00:03:43.059 CC module/bdev/ftl/bdev_ftl.o 00:03:43.059 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:43.059 CC module/bdev/split/vbdev_split_rpc.o 00:03:43.317 CC module/bdev/raid/concat.o 00:03:43.317 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:43.317 LIB libspdk_bdev_split.a 00:03:43.317 SO libspdk_bdev_split.so.6.0 00:03:43.317 CC module/bdev/uring/bdev_uring_rpc.o 00:03:43.576 SYMLINK libspdk_bdev_split.so 00:03:43.576 CC module/bdev/aio/bdev_aio_rpc.o 00:03:43.576 LIB libspdk_bdev_zone_block.a 00:03:43.576 LIB libspdk_bdev_ftl.a 00:03:43.576 SO libspdk_bdev_zone_block.so.6.0 00:03:43.576 SO libspdk_bdev_ftl.so.6.0 00:03:43.576 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:43.576 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:43.576 CC module/bdev/iscsi/bdev_iscsi.o 00:03:43.576 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:43.576 SYMLINK libspdk_bdev_ftl.so 00:03:43.576 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:43.576 SYMLINK libspdk_bdev_zone_block.so 00:03:43.576 LIB libspdk_bdev_uring.a 00:03:43.576 LIB libspdk_bdev_aio.a 00:03:43.576 SO libspdk_bdev_uring.so.6.0 00:03:43.576 SO libspdk_bdev_aio.so.6.0 00:03:43.835 SYMLINK libspdk_bdev_aio.so 00:03:43.835 SYMLINK libspdk_bdev_uring.so 00:03:43.835 LIB libspdk_bdev_raid.a 00:03:44.094 SO libspdk_bdev_raid.so.6.0 00:03:44.094 LIB libspdk_bdev_iscsi.a 00:03:44.094 SYMLINK libspdk_bdev_raid.so 00:03:44.094 SO libspdk_bdev_iscsi.so.6.0 00:03:44.094 SYMLINK libspdk_bdev_iscsi.so 00:03:44.352 LIB libspdk_bdev_virtio.a 00:03:44.352 SO libspdk_bdev_virtio.so.6.0 00:03:44.352 SYMLINK libspdk_bdev_virtio.so 00:03:45.753 LIB libspdk_bdev_nvme.a 00:03:45.753 SO libspdk_bdev_nvme.so.7.1 00:03:45.753 SYMLINK libspdk_bdev_nvme.so 00:03:46.320 CC module/event/subsystems/fsdev/fsdev.o 00:03:46.320 CC module/event/subsystems/sock/sock.o 00:03:46.320 CC module/event/subsystems/scheduler/scheduler.o 00:03:46.320 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:46.320 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:46.320 CC module/event/subsystems/iobuf/iobuf.o 00:03:46.320 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:46.320 CC module/event/subsystems/vmd/vmd.o 00:03:46.320 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:46.320 CC module/event/subsystems/keyring/keyring.o 00:03:46.320 LIB libspdk_event_fsdev.a 00:03:46.320 LIB libspdk_event_scheduler.a 00:03:46.320 LIB libspdk_event_vhost_blk.a 00:03:46.320 SO libspdk_event_fsdev.so.1.0 00:03:46.320 SO libspdk_event_vhost_blk.so.3.0 00:03:46.320 SO libspdk_event_scheduler.so.4.0 00:03:46.320 LIB libspdk_event_vfu_tgt.a 00:03:46.320 LIB libspdk_event_sock.a 00:03:46.320 LIB libspdk_event_vmd.a 00:03:46.320 LIB libspdk_event_keyring.a 00:03:46.320 SO libspdk_event_vfu_tgt.so.3.0 00:03:46.320 LIB libspdk_event_iobuf.a 00:03:46.320 SO libspdk_event_sock.so.5.0 00:03:46.320 SO libspdk_event_vmd.so.6.0 00:03:46.320 SO libspdk_event_keyring.so.1.0 
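
[Note: the libspdk_event_* modules linked above form SPDK's event (application) framework, which the functional tests later in this run drive. A minimal application skeleton against that framework, as a hedged sketch assuming spdk/event.h and linkage against the event libraries produced here; the app name is hypothetical.]

#include "spdk/event.h"
#include "spdk/log.h"

static void
hello_start(void *ctx)
{
	(void)ctx;
	SPDK_NOTICELOG("reactor is running\n");
	spdk_app_stop(0); /* ask the framework to exit its reactor loop */
}

int main(int argc, char **argv)
{
	(void)argc;
	(void)argv;

	struct spdk_app_opts opts = {};
	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "hello_event"; /* hypothetical name for this sketch */

	/* Blocks until spdk_app_stop() is called from hello_start(). */
	int rc = spdk_app_start(&opts, hello_start, NULL);
	spdk_app_fini();
	return rc;
}
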
00:03:46.320 SYMLINK libspdk_event_fsdev.so 00:03:46.320 SYMLINK libspdk_event_scheduler.so 00:03:46.320 SYMLINK libspdk_event_vhost_blk.so 00:03:46.320 SO libspdk_event_iobuf.so.3.0 00:03:46.320 SYMLINK libspdk_event_sock.so 00:03:46.320 SYMLINK libspdk_event_vfu_tgt.so 00:03:46.320 SYMLINK libspdk_event_vmd.so 00:03:46.320 SYMLINK libspdk_event_keyring.so 00:03:46.320 SYMLINK libspdk_event_iobuf.so 00:03:46.888 CC module/event/subsystems/accel/accel.o 00:03:46.888 LIB libspdk_event_accel.a 00:03:46.888 SO libspdk_event_accel.so.6.0 00:03:46.888 SYMLINK libspdk_event_accel.so 00:03:47.147 CC module/event/subsystems/bdev/bdev.o 00:03:47.406 LIB libspdk_event_bdev.a 00:03:47.406 SO libspdk_event_bdev.so.6.0 00:03:47.406 SYMLINK libspdk_event_bdev.so 00:03:47.664 CC module/event/subsystems/scsi/scsi.o 00:03:47.664 CC module/event/subsystems/nbd/nbd.o 00:03:47.664 CC module/event/subsystems/ublk/ublk.o 00:03:47.664 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:47.664 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:47.924 LIB libspdk_event_ublk.a 00:03:47.924 LIB libspdk_event_scsi.a 00:03:47.924 LIB libspdk_event_nbd.a 00:03:47.924 SO libspdk_event_scsi.so.6.0 00:03:47.924 SO libspdk_event_ublk.so.3.0 00:03:47.924 SO libspdk_event_nbd.so.6.0 00:03:47.924 SYMLINK libspdk_event_scsi.so 00:03:47.924 SYMLINK libspdk_event_ublk.so 00:03:47.924 SYMLINK libspdk_event_nbd.so 00:03:48.183 LIB libspdk_event_nvmf.a 00:03:48.183 SO libspdk_event_nvmf.so.6.0 00:03:48.183 SYMLINK libspdk_event_nvmf.so 00:03:48.183 CC module/event/subsystems/iscsi/iscsi.o 00:03:48.183 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:48.441 LIB libspdk_event_iscsi.a 00:03:48.441 LIB libspdk_event_vhost_scsi.a 00:03:48.441 SO libspdk_event_iscsi.so.6.0 00:03:48.441 SO libspdk_event_vhost_scsi.so.3.0 00:03:48.441 SYMLINK libspdk_event_iscsi.so 00:03:48.700 SYMLINK libspdk_event_vhost_scsi.so 00:03:48.700 SO libspdk.so.6.0 00:03:48.700 SYMLINK libspdk.so 00:03:48.957 CC app/trace_record/trace_record.o 00:03:48.957 TEST_HEADER include/spdk/accel.h 00:03:48.957 CXX app/trace/trace.o 00:03:48.957 TEST_HEADER include/spdk/accel_module.h 00:03:48.957 TEST_HEADER include/spdk/assert.h 00:03:48.957 TEST_HEADER include/spdk/barrier.h 00:03:48.957 CC test/rpc_client/rpc_client_test.o 00:03:48.957 TEST_HEADER include/spdk/base64.h 00:03:48.957 TEST_HEADER include/spdk/bdev.h 00:03:48.957 TEST_HEADER include/spdk/bdev_module.h 00:03:48.957 TEST_HEADER include/spdk/bdev_zone.h 00:03:48.957 TEST_HEADER include/spdk/bit_array.h 00:03:48.957 TEST_HEADER include/spdk/bit_pool.h 00:03:48.957 TEST_HEADER include/spdk/blob_bdev.h 00:03:48.957 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:48.957 TEST_HEADER include/spdk/blobfs.h 00:03:48.957 TEST_HEADER include/spdk/blob.h 00:03:48.957 TEST_HEADER include/spdk/conf.h 00:03:48.957 TEST_HEADER include/spdk/config.h 00:03:48.957 TEST_HEADER include/spdk/cpuset.h 00:03:48.957 TEST_HEADER include/spdk/crc16.h 00:03:48.957 TEST_HEADER include/spdk/crc32.h 00:03:48.957 TEST_HEADER include/spdk/crc64.h 00:03:48.957 TEST_HEADER include/spdk/dif.h 00:03:48.957 TEST_HEADER include/spdk/dma.h 00:03:48.957 TEST_HEADER include/spdk/endian.h 00:03:48.957 TEST_HEADER include/spdk/env_dpdk.h 00:03:48.957 TEST_HEADER include/spdk/env.h 00:03:48.957 TEST_HEADER include/spdk/event.h 00:03:48.957 TEST_HEADER include/spdk/fd_group.h 00:03:48.957 TEST_HEADER include/spdk/fd.h 00:03:48.957 CC app/nvmf_tgt/nvmf_main.o 00:03:48.957 TEST_HEADER include/spdk/file.h 00:03:48.957 TEST_HEADER include/spdk/fsdev.h 
00:03:48.957 TEST_HEADER include/spdk/fsdev_module.h 00:03:48.957 TEST_HEADER include/spdk/ftl.h 00:03:48.957 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:48.957 TEST_HEADER include/spdk/gpt_spec.h 00:03:48.957 TEST_HEADER include/spdk/hexlify.h 00:03:48.957 TEST_HEADER include/spdk/histogram_data.h 00:03:48.957 TEST_HEADER include/spdk/idxd.h 00:03:48.957 TEST_HEADER include/spdk/idxd_spec.h 00:03:48.957 TEST_HEADER include/spdk/init.h 00:03:48.957 TEST_HEADER include/spdk/ioat.h 00:03:48.957 CC test/thread/poller_perf/poller_perf.o 00:03:49.215 TEST_HEADER include/spdk/ioat_spec.h 00:03:49.215 TEST_HEADER include/spdk/iscsi_spec.h 00:03:49.215 TEST_HEADER include/spdk/json.h 00:03:49.215 TEST_HEADER include/spdk/jsonrpc.h 00:03:49.215 TEST_HEADER include/spdk/keyring.h 00:03:49.215 TEST_HEADER include/spdk/keyring_module.h 00:03:49.215 TEST_HEADER include/spdk/likely.h 00:03:49.215 TEST_HEADER include/spdk/log.h 00:03:49.215 TEST_HEADER include/spdk/lvol.h 00:03:49.215 TEST_HEADER include/spdk/md5.h 00:03:49.215 TEST_HEADER include/spdk/memory.h 00:03:49.215 TEST_HEADER include/spdk/mmio.h 00:03:49.215 TEST_HEADER include/spdk/nbd.h 00:03:49.215 TEST_HEADER include/spdk/net.h 00:03:49.215 TEST_HEADER include/spdk/notify.h 00:03:49.215 CC examples/util/zipf/zipf.o 00:03:49.215 TEST_HEADER include/spdk/nvme.h 00:03:49.215 TEST_HEADER include/spdk/nvme_intel.h 00:03:49.215 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:49.215 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:49.215 TEST_HEADER include/spdk/nvme_spec.h 00:03:49.215 TEST_HEADER include/spdk/nvme_zns.h 00:03:49.215 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:49.215 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:49.215 TEST_HEADER include/spdk/nvmf.h 00:03:49.215 TEST_HEADER include/spdk/nvmf_spec.h 00:03:49.215 CC test/dma/test_dma/test_dma.o 00:03:49.215 TEST_HEADER include/spdk/nvmf_transport.h 00:03:49.215 TEST_HEADER include/spdk/opal.h 00:03:49.215 CC test/app/bdev_svc/bdev_svc.o 00:03:49.215 TEST_HEADER include/spdk/opal_spec.h 00:03:49.215 TEST_HEADER include/spdk/pci_ids.h 00:03:49.215 TEST_HEADER include/spdk/pipe.h 00:03:49.215 TEST_HEADER include/spdk/queue.h 00:03:49.215 TEST_HEADER include/spdk/reduce.h 00:03:49.215 TEST_HEADER include/spdk/rpc.h 00:03:49.215 TEST_HEADER include/spdk/scheduler.h 00:03:49.215 TEST_HEADER include/spdk/scsi.h 00:03:49.215 TEST_HEADER include/spdk/scsi_spec.h 00:03:49.215 TEST_HEADER include/spdk/sock.h 00:03:49.215 CC test/env/mem_callbacks/mem_callbacks.o 00:03:49.215 TEST_HEADER include/spdk/stdinc.h 00:03:49.215 TEST_HEADER include/spdk/string.h 00:03:49.215 TEST_HEADER include/spdk/thread.h 00:03:49.215 TEST_HEADER include/spdk/trace.h 00:03:49.215 TEST_HEADER include/spdk/trace_parser.h 00:03:49.215 TEST_HEADER include/spdk/tree.h 00:03:49.215 TEST_HEADER include/spdk/ublk.h 00:03:49.215 TEST_HEADER include/spdk/util.h 00:03:49.215 TEST_HEADER include/spdk/uuid.h 00:03:49.215 TEST_HEADER include/spdk/version.h 00:03:49.216 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:49.216 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:49.216 TEST_HEADER include/spdk/vhost.h 00:03:49.216 TEST_HEADER include/spdk/vmd.h 00:03:49.216 TEST_HEADER include/spdk/xor.h 00:03:49.216 TEST_HEADER include/spdk/zipf.h 00:03:49.216 CXX test/cpp_headers/accel.o 00:03:49.216 LINK rpc_client_test 00:03:49.216 LINK poller_perf 00:03:49.216 LINK nvmf_tgt 00:03:49.473 LINK spdk_trace_record 00:03:49.473 LINK zipf 00:03:49.473 LINK bdev_svc 00:03:49.473 CXX test/cpp_headers/accel_module.o 00:03:49.473 LINK 
spdk_trace 00:03:49.473 CC test/app/histogram_perf/histogram_perf.o 00:03:49.731 CC test/app/jsoncat/jsoncat.o 00:03:49.731 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:49.731 LINK histogram_perf 00:03:49.731 CC examples/ioat/perf/perf.o 00:03:49.731 CXX test/cpp_headers/assert.o 00:03:49.731 CC examples/vmd/lsvmd/lsvmd.o 00:03:49.731 LINK jsoncat 00:03:49.731 LINK test_dma 00:03:49.731 CC examples/idxd/perf/perf.o 00:03:49.989 CC app/iscsi_tgt/iscsi_tgt.o 00:03:49.989 LINK mem_callbacks 00:03:49.989 LINK lsvmd 00:03:49.989 CXX test/cpp_headers/barrier.o 00:03:49.989 CC examples/ioat/verify/verify.o 00:03:49.989 LINK ioat_perf 00:03:49.989 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:49.989 LINK iscsi_tgt 00:03:50.248 CC test/env/vtophys/vtophys.o 00:03:50.248 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:50.248 CXX test/cpp_headers/base64.o 00:03:50.248 LINK nvme_fuzz 00:03:50.248 CC examples/vmd/led/led.o 00:03:50.248 CXX test/cpp_headers/bdev.o 00:03:50.248 LINK idxd_perf 00:03:50.248 LINK vtophys 00:03:50.248 LINK verify 00:03:50.248 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:50.506 LINK led 00:03:50.506 CC app/spdk_lspci/spdk_lspci.o 00:03:50.506 CXX test/cpp_headers/bdev_module.o 00:03:50.506 CXX test/cpp_headers/bdev_zone.o 00:03:50.506 CC app/spdk_nvme_perf/perf.o 00:03:50.506 CC app/spdk_tgt/spdk_tgt.o 00:03:50.506 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:50.506 CC test/env/memory/memory_ut.o 00:03:50.506 LINK spdk_lspci 00:03:50.765 CXX test/cpp_headers/bit_array.o 00:03:50.765 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:50.765 LINK spdk_tgt 00:03:50.765 LINK env_dpdk_post_init 00:03:50.765 CC test/env/pci/pci_ut.o 00:03:50.765 LINK vhost_fuzz 00:03:50.765 CXX test/cpp_headers/bit_pool.o 00:03:51.023 LINK interrupt_tgt 00:03:51.023 CC examples/thread/thread/thread_ex.o 00:03:51.023 CXX test/cpp_headers/blob_bdev.o 00:03:51.023 CC test/event/event_perf/event_perf.o 00:03:51.281 CC test/nvme/aer/aer.o 00:03:51.281 CC test/nvme/reset/reset.o 00:03:51.281 CC test/accel/dif/dif.o 00:03:51.281 LINK event_perf 00:03:51.281 CXX test/cpp_headers/blobfs_bdev.o 00:03:51.281 LINK pci_ut 00:03:51.281 LINK thread 00:03:51.540 LINK reset 00:03:51.540 CXX test/cpp_headers/blobfs.o 00:03:51.540 LINK aer 00:03:51.540 CC test/event/reactor/reactor.o 00:03:51.798 CXX test/cpp_headers/blob.o 00:03:51.798 LINK spdk_nvme_perf 00:03:51.798 LINK reactor 00:03:51.798 CC test/nvme/sgl/sgl.o 00:03:51.798 CC test/app/stub/stub.o 00:03:51.798 CC examples/sock/hello_world/hello_sock.o 00:03:51.798 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:51.798 CXX test/cpp_headers/conf.o 00:03:52.056 LINK memory_ut 00:03:52.056 CC test/event/reactor_perf/reactor_perf.o 00:03:52.056 LINK stub 00:03:52.056 CC app/spdk_nvme_identify/identify.o 00:03:52.056 CXX test/cpp_headers/config.o 00:03:52.056 CXX test/cpp_headers/cpuset.o 00:03:52.056 LINK hello_sock 00:03:52.056 LINK sgl 00:03:52.314 LINK dif 00:03:52.314 LINK reactor_perf 00:03:52.314 LINK hello_fsdev 00:03:52.314 CXX test/cpp_headers/crc16.o 00:03:52.314 CXX test/cpp_headers/crc32.o 00:03:52.314 LINK iscsi_fuzz 00:03:52.314 CC test/nvme/e2edp/nvme_dp.o 00:03:52.572 CC test/blobfs/mkfs/mkfs.o 00:03:52.572 CC test/event/app_repeat/app_repeat.o 00:03:52.572 CC examples/accel/perf/accel_perf.o 00:03:52.572 CXX test/cpp_headers/crc64.o 00:03:52.572 CC test/nvme/overhead/overhead.o 00:03:52.572 CC test/nvme/err_injection/err_injection.o 00:03:52.572 CC test/nvme/startup/startup.o 00:03:52.572 LINK app_repeat 00:03:52.572 CC 
test/nvme/reserve/reserve.o 00:03:52.572 LINK mkfs 00:03:52.830 CXX test/cpp_headers/dif.o 00:03:52.830 LINK startup 00:03:52.830 LINK nvme_dp 00:03:52.830 LINK err_injection 00:03:52.830 LINK overhead 00:03:52.830 CXX test/cpp_headers/dma.o 00:03:52.830 LINK reserve 00:03:53.087 CXX test/cpp_headers/endian.o 00:03:53.087 CC test/event/scheduler/scheduler.o 00:03:53.087 CC test/nvme/simple_copy/simple_copy.o 00:03:53.087 CC test/nvme/connect_stress/connect_stress.o 00:03:53.087 CC test/nvme/boot_partition/boot_partition.o 00:03:53.087 CXX test/cpp_headers/env_dpdk.o 00:03:53.087 LINK accel_perf 00:03:53.345 CC test/nvme/compliance/nvme_compliance.o 00:03:53.345 LINK spdk_nvme_identify 00:03:53.345 LINK boot_partition 00:03:53.345 CC test/nvme/fused_ordering/fused_ordering.o 00:03:53.345 LINK connect_stress 00:03:53.345 LINK scheduler 00:03:53.345 CXX test/cpp_headers/env.o 00:03:53.345 CC test/lvol/esnap/esnap.o 00:03:53.345 LINK simple_copy 00:03:53.604 CXX test/cpp_headers/event.o 00:03:53.604 LINK fused_ordering 00:03:53.604 CC app/spdk_nvme_discover/discovery_aer.o 00:03:53.604 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:53.604 CC examples/blob/hello_world/hello_blob.o 00:03:53.604 CC test/nvme/fdp/fdp.o 00:03:53.604 CC test/nvme/cuse/cuse.o 00:03:53.604 LINK nvme_compliance 00:03:53.604 CXX test/cpp_headers/fd_group.o 00:03:53.604 CC test/bdev/bdevio/bdevio.o 00:03:53.862 LINK spdk_nvme_discover 00:03:53.862 LINK doorbell_aers 00:03:53.862 CC examples/blob/cli/blobcli.o 00:03:53.862 CXX test/cpp_headers/fd.o 00:03:53.862 LINK hello_blob 00:03:54.119 CC examples/nvme/hello_world/hello_world.o 00:03:54.119 LINK fdp 00:03:54.119 CC app/spdk_top/spdk_top.o 00:03:54.119 CC examples/nvme/reconnect/reconnect.o 00:03:54.119 CXX test/cpp_headers/file.o 00:03:54.119 LINK bdevio 00:03:54.119 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:54.382 CXX test/cpp_headers/fsdev.o 00:03:54.382 LINK hello_world 00:03:54.382 CXX test/cpp_headers/fsdev_module.o 00:03:54.640 CXX test/cpp_headers/ftl.o 00:03:54.640 LINK blobcli 00:03:54.640 CC examples/nvme/arbitration/arbitration.o 00:03:54.640 LINK reconnect 00:03:54.640 CC examples/nvme/hotplug/hotplug.o 00:03:54.640 CXX test/cpp_headers/fuse_dispatcher.o 00:03:54.640 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:54.640 CXX test/cpp_headers/gpt_spec.o 00:03:54.898 LINK nvme_manage 00:03:54.898 CC app/vhost/vhost.o 00:03:54.898 LINK hotplug 00:03:54.898 CXX test/cpp_headers/hexlify.o 00:03:54.898 LINK arbitration 00:03:54.898 LINK cmb_copy 00:03:55.156 CC app/spdk_dd/spdk_dd.o 00:03:55.156 CXX test/cpp_headers/histogram_data.o 00:03:55.156 LINK vhost 00:03:55.156 CXX test/cpp_headers/idxd.o 00:03:55.156 CC examples/nvme/abort/abort.o 00:03:55.157 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:55.415 LINK spdk_top 00:03:55.415 LINK cuse 00:03:55.415 CC app/fio/nvme/fio_plugin.o 00:03:55.415 CXX test/cpp_headers/idxd_spec.o 00:03:55.415 CC app/fio/bdev/fio_plugin.o 00:03:55.415 LINK pmr_persistence 00:03:55.673 CXX test/cpp_headers/init.o 00:03:55.673 CC examples/bdev/hello_world/hello_bdev.o 00:03:55.673 CXX test/cpp_headers/ioat.o 00:03:55.673 CXX test/cpp_headers/ioat_spec.o 00:03:55.673 LINK spdk_dd 00:03:55.673 CC examples/bdev/bdevperf/bdevperf.o 00:03:55.932 CXX test/cpp_headers/iscsi_spec.o 00:03:55.932 LINK abort 00:03:55.932 CXX test/cpp_headers/json.o 00:03:55.932 CXX test/cpp_headers/jsonrpc.o 00:03:55.932 LINK hello_bdev 00:03:55.932 CXX test/cpp_headers/keyring.o 00:03:55.932 CXX test/cpp_headers/keyring_module.o 00:03:56.190 CXX 
test/cpp_headers/likely.o 00:03:56.190 CXX test/cpp_headers/log.o 00:03:56.190 CXX test/cpp_headers/lvol.o 00:03:56.190 LINK spdk_bdev 00:03:56.190 CXX test/cpp_headers/md5.o 00:03:56.190 LINK spdk_nvme 00:03:56.190 CXX test/cpp_headers/memory.o 00:03:56.190 CXX test/cpp_headers/mmio.o 00:03:56.190 CXX test/cpp_headers/nbd.o 00:03:56.190 CXX test/cpp_headers/net.o 00:03:56.190 CXX test/cpp_headers/notify.o 00:03:56.190 CXX test/cpp_headers/nvme.o 00:03:56.190 CXX test/cpp_headers/nvme_intel.o 00:03:56.449 CXX test/cpp_headers/nvme_ocssd.o 00:03:56.449 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:56.449 CXX test/cpp_headers/nvme_spec.o 00:03:56.449 CXX test/cpp_headers/nvme_zns.o 00:03:56.449 CXX test/cpp_headers/nvmf_cmd.o 00:03:56.449 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:56.449 CXX test/cpp_headers/nvmf.o 00:03:56.449 CXX test/cpp_headers/nvmf_spec.o 00:03:56.449 CXX test/cpp_headers/nvmf_transport.o 00:03:56.707 CXX test/cpp_headers/opal.o 00:03:56.707 CXX test/cpp_headers/opal_spec.o 00:03:56.707 CXX test/cpp_headers/pci_ids.o 00:03:56.707 CXX test/cpp_headers/pipe.o 00:03:56.707 CXX test/cpp_headers/queue.o 00:03:56.707 CXX test/cpp_headers/reduce.o 00:03:56.707 CXX test/cpp_headers/rpc.o 00:03:56.707 CXX test/cpp_headers/scheduler.o 00:03:56.707 CXX test/cpp_headers/scsi.o 00:03:56.707 CXX test/cpp_headers/scsi_spec.o 00:03:56.707 LINK bdevperf 00:03:56.965 CXX test/cpp_headers/sock.o 00:03:56.965 CXX test/cpp_headers/stdinc.o 00:03:56.965 CXX test/cpp_headers/string.o 00:03:56.965 CXX test/cpp_headers/thread.o 00:03:56.965 CXX test/cpp_headers/trace.o 00:03:56.965 CXX test/cpp_headers/trace_parser.o 00:03:56.965 CXX test/cpp_headers/tree.o 00:03:56.965 CXX test/cpp_headers/ublk.o 00:03:56.965 CXX test/cpp_headers/util.o 00:03:56.965 CXX test/cpp_headers/uuid.o 00:03:56.965 CXX test/cpp_headers/version.o 00:03:56.965 CXX test/cpp_headers/vfio_user_pci.o 00:03:56.965 CXX test/cpp_headers/vfio_user_spec.o 00:03:56.965 CXX test/cpp_headers/vhost.o 00:03:56.965 CXX test/cpp_headers/vmd.o 00:03:57.224 CXX test/cpp_headers/xor.o 00:03:57.224 CXX test/cpp_headers/zipf.o 00:03:57.224 CC examples/nvmf/nvmf/nvmf.o 00:03:57.790 LINK nvmf 00:04:01.091 LINK esnap 00:04:01.091 00:04:01.091 real 1m39.975s 00:04:01.091 user 9m30.449s 00:04:01.091 sys 1m38.620s 00:04:01.091 23:48:07 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:01.091 23:48:07 make -- common/autotest_common.sh@10 -- $ set +x 00:04:01.091 ************************************ 00:04:01.091 END TEST make 00:04:01.091 ************************************ 00:04:01.091 23:48:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:01.091 23:48:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:01.091 23:48:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:01.091 23:48:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:01.091 23:48:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:01.091 23:48:07 -- pm/common@44 -- $ pid=5293 00:04:01.091 23:48:07 -- pm/common@50 -- $ kill -TERM 5293 00:04:01.091 23:48:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:01.091 23:48:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:01.091 23:48:07 -- pm/common@44 -- $ pid=5294 00:04:01.091 23:48:07 -- pm/common@50 -- $ kill -TERM 5294 00:04:01.091 23:48:07 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:01.091 
23:48:07 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:01.350 23:48:07 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:01.350 23:48:07 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:01.350 23:48:07 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:01.350 23:48:07 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:01.350 23:48:07 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.350 23:48:07 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.350 23:48:07 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.351 23:48:07 -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.351 23:48:07 -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.351 23:48:07 -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.351 23:48:07 -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.351 23:48:07 -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.351 23:48:07 -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.351 23:48:07 -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.351 23:48:07 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.351 23:48:07 -- scripts/common.sh@344 -- # case "$op" in 00:04:01.351 23:48:07 -- scripts/common.sh@345 -- # : 1 00:04:01.351 23:48:07 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.351 23:48:07 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:01.351 23:48:07 -- scripts/common.sh@365 -- # decimal 1 00:04:01.351 23:48:07 -- scripts/common.sh@353 -- # local d=1 00:04:01.351 23:48:07 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.351 23:48:07 -- scripts/common.sh@355 -- # echo 1 00:04:01.351 23:48:07 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.351 23:48:07 -- scripts/common.sh@366 -- # decimal 2 00:04:01.351 23:48:07 -- scripts/common.sh@353 -- # local d=2 00:04:01.351 23:48:07 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.351 23:48:07 -- scripts/common.sh@355 -- # echo 2 00:04:01.351 23:48:07 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.351 23:48:07 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.351 23:48:07 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.351 23:48:07 -- scripts/common.sh@368 -- # return 0 00:04:01.351 23:48:07 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.351 23:48:07 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:01.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.351 --rc genhtml_branch_coverage=1 00:04:01.351 --rc genhtml_function_coverage=1 00:04:01.351 --rc genhtml_legend=1 00:04:01.351 --rc geninfo_all_blocks=1 00:04:01.351 --rc geninfo_unexecuted_blocks=1 00:04:01.351 00:04:01.351 ' 00:04:01.351 23:48:07 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:01.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.351 --rc genhtml_branch_coverage=1 00:04:01.351 --rc genhtml_function_coverage=1 00:04:01.351 --rc genhtml_legend=1 00:04:01.351 --rc geninfo_all_blocks=1 00:04:01.351 --rc geninfo_unexecuted_blocks=1 00:04:01.351 00:04:01.351 ' 00:04:01.351 23:48:07 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:01.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.351 --rc genhtml_branch_coverage=1 00:04:01.351 --rc genhtml_function_coverage=1 00:04:01.351 --rc genhtml_legend=1 00:04:01.351 --rc geninfo_all_blocks=1 00:04:01.351 --rc geninfo_unexecuted_blocks=1 
00:04:01.351 00:04:01.351 ' 00:04:01.351 23:48:07 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:01.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.351 --rc genhtml_branch_coverage=1 00:04:01.351 --rc genhtml_function_coverage=1 00:04:01.351 --rc genhtml_legend=1 00:04:01.351 --rc geninfo_all_blocks=1 00:04:01.351 --rc geninfo_unexecuted_blocks=1 00:04:01.351 00:04:01.351 ' 00:04:01.351 23:48:07 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:01.351 23:48:07 -- nvmf/common.sh@7 -- # uname -s 00:04:01.351 23:48:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:01.351 23:48:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:01.351 23:48:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:01.351 23:48:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:01.351 23:48:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:01.351 23:48:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:01.351 23:48:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:01.351 23:48:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:01.351 23:48:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:01.351 23:48:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:01.351 23:48:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:04:01.351 23:48:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:04:01.351 23:48:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:01.351 23:48:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:01.351 23:48:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:01.351 23:48:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:01.351 23:48:07 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:01.351 23:48:07 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:01.351 23:48:07 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:01.351 23:48:07 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:01.351 23:48:07 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:01.351 23:48:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.351 23:48:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.351 23:48:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.351 23:48:07 -- paths/export.sh@5 -- # export PATH 00:04:01.351 23:48:07 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.351 23:48:07 -- nvmf/common.sh@51 -- # : 0 00:04:01.351 23:48:07 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:01.351 23:48:07 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:01.351 23:48:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:01.351 23:48:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:01.351 23:48:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:01.351 23:48:07 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:01.351 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:01.351 23:48:07 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:01.351 23:48:07 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:01.351 23:48:07 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:01.351 23:48:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:01.351 23:48:07 -- spdk/autotest.sh@32 -- # uname -s 00:04:01.351 23:48:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:01.351 23:48:08 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:01.351 23:48:08 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:01.351 23:48:08 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:01.351 23:48:08 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:01.351 23:48:08 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:01.610 23:48:08 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:01.610 23:48:08 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:01.610 23:48:08 -- spdk/autotest.sh@48 -- # udevadm_pid=55040 00:04:01.610 23:48:08 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:01.610 23:48:08 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:01.610 23:48:08 -- pm/common@17 -- # local monitor 00:04:01.610 23:48:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:01.610 23:48:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:01.610 23:48:08 -- pm/common@25 -- # sleep 1 00:04:01.610 23:48:08 -- pm/common@21 -- # date +%s 00:04:01.610 23:48:08 -- pm/common@21 -- # date +%s 00:04:01.610 23:48:08 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731973688 00:04:01.610 23:48:08 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731973688 00:04:01.610 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731973688_collect-cpu-load.pm.log 00:04:01.610 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731973688_collect-vmstat.pm.log 00:04:02.547 23:48:09 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:02.547 23:48:09 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:02.547 23:48:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.547 23:48:09 -- common/autotest_common.sh@10 -- # set +x 00:04:02.547 23:48:09 -- spdk/autotest.sh@59 -- # create_test_list 
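The '[: : integer expression expected' message above is bash complaining that an empty string reached a numeric test ('[' '' -eq 1 ']' at nvmf/common.sh line 33); the run continues because the failed test simply returns nonzero. A minimal sketch of the usual hardening, using an illustrative variable name rather than the one actually tested in nvmf/common.sh:

#!/usr/bin/env bash
# Default the value to 0 so an unset/empty variable never reaches '-eq'.
some_flag=""                      # hypothetical stand-in for the real config variable
if [[ "${some_flag:-0}" -eq 1 ]]; then
    echo "flag enabled"
else
    echo "flag unset or disabled" # this branch runs; no 'integer expression expected'
fi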
00:04:02.547 23:48:09 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:02.547 23:48:09 -- common/autotest_common.sh@10 -- # set +x 00:04:02.547 23:48:09 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:02.547 23:48:09 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:02.547 23:48:09 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:02.547 23:48:09 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:02.547 23:48:09 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:02.547 23:48:09 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:02.547 23:48:09 -- common/autotest_common.sh@1457 -- # uname 00:04:02.547 23:48:09 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:02.547 23:48:09 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:02.547 23:48:09 -- common/autotest_common.sh@1477 -- # uname 00:04:02.547 23:48:09 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:02.547 23:48:09 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:02.547 23:48:09 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:02.547 lcov: LCOV version 1.15 00:04:02.547 23:48:09 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:20.638 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:20.638 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:35.539 23:48:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:35.539 23:48:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.539 23:48:40 -- common/autotest_common.sh@10 -- # set +x 00:04:35.539 23:48:40 -- spdk/autotest.sh@78 -- # rm -f 00:04:35.539 23:48:40 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:35.539 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.539 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:35.539 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:35.539 23:48:41 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:35.539 23:48:41 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:35.539 23:48:41 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:35.539 23:48:41 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:35.539 23:48:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:35.539 23:48:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:35.539 23:48:41 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:35.539 23:48:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:35.539 23:48:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:35.539 23:48:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:35.539 23:48:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 
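The get_zoned_devs helper traced above and continuing below walks /sys/block/nvme* and treats a namespace as zoned when its queue/zoned attribute reads anything other than "none". A minimal standalone sketch of the same sysfs check (assuming a Linux sysfs layout; the device names are whatever the kernel enumerated):

#!/usr/bin/env bash
# A namespace is zoned when /sys/block/<dev>/queue/zoned exists and is not "none".
zoned_devs=()
for dev in /sys/block/nvme*; do
    [[ -e $dev/queue/zoned ]] || continue
    if [[ $(<"$dev/queue/zoned") != none ]]; then
        zoned_devs+=("${dev##*/}")
    fi
done
echo "zoned namespaces: ${zoned_devs[*]:-none}"

On this VM every namespace reports "none", which is why the [[ none != none ]] checks above all fall through and the wipe proceeds on each device.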
00:04:35.539 23:48:41 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:35.539 23:48:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:35.539 23:48:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:35.539 23:48:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:35.539 23:48:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:35.539 23:48:41 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:35.539 23:48:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:35.539 23:48:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:35.539 23:48:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:35.539 23:48:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:35.539 23:48:41 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:35.539 23:48:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:35.539 23:48:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:35.539 23:48:41 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:35.539 23:48:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:35.539 23:48:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:35.539 23:48:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:35.539 23:48:41 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:35.539 23:48:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:35.539 No valid GPT data, bailing 00:04:35.539 23:48:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:35.539 23:48:41 -- scripts/common.sh@394 -- # pt= 00:04:35.539 23:48:41 -- scripts/common.sh@395 -- # return 1 00:04:35.539 23:48:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:35.539 1+0 records in 00:04:35.539 1+0 records out 00:04:35.539 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00380787 s, 275 MB/s 00:04:35.539 23:48:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:35.539 23:48:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:35.539 23:48:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:35.539 23:48:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:35.539 23:48:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:35.539 No valid GPT data, bailing 00:04:35.539 23:48:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:35.539 23:48:41 -- scripts/common.sh@394 -- # pt= 00:04:35.539 23:48:41 -- scripts/common.sh@395 -- # return 1 00:04:35.539 23:48:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:35.539 1+0 records in 00:04:35.539 1+0 records out 00:04:35.539 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00419717 s, 250 MB/s 00:04:35.539 23:48:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:35.539 23:48:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:35.539 23:48:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:35.539 23:48:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:35.539 23:48:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:35.539 No valid GPT data, bailing 00:04:35.539 23:48:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:35.539 23:48:41 -- 
scripts/common.sh@394 -- # pt= 00:04:35.539 23:48:41 -- scripts/common.sh@395 -- # return 1 00:04:35.539 23:48:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:35.539 1+0 records in 00:04:35.539 1+0 records out 00:04:35.539 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435821 s, 241 MB/s 00:04:35.539 23:48:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:35.539 23:48:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:35.539 23:48:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:35.539 23:48:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:35.539 23:48:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:35.539 No valid GPT data, bailing 00:04:35.539 23:48:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:35.539 23:48:41 -- scripts/common.sh@394 -- # pt= 00:04:35.539 23:48:41 -- scripts/common.sh@395 -- # return 1 00:04:35.539 23:48:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:35.539 1+0 records in 00:04:35.539 1+0 records out 00:04:35.539 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436749 s, 240 MB/s 00:04:35.539 23:48:41 -- spdk/autotest.sh@105 -- # sync 00:04:35.539 23:48:41 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:35.539 23:48:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:35.539 23:48:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:37.444 23:48:43 -- spdk/autotest.sh@111 -- # uname -s 00:04:37.444 23:48:43 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:37.444 23:48:43 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:37.444 23:48:43 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:37.703 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:37.703 Hugepages 00:04:37.703 node hugesize free / total 00:04:37.703 node0 1048576kB 0 / 0 00:04:37.703 node0 2048kB 0 / 0 00:04:37.703 00:04:37.703 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:37.962 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:37.962 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:37.962 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:37.962 23:48:44 -- spdk/autotest.sh@117 -- # uname -s 00:04:37.962 23:48:44 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:37.962 23:48:44 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:37.962 23:48:44 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:38.898 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:38.898 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:38.898 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:38.898 23:48:45 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:39.833 23:48:46 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:39.833 23:48:46 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:39.833 23:48:46 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:39.833 23:48:46 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:39.833 23:48:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:39.833 23:48:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:39.833 23:48:46 -- common/autotest_common.sh@1499 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:39.833 23:48:46 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:39.833 23:48:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:39.833 23:48:46 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:39.833 23:48:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:39.833 23:48:46 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:40.401 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.401 Waiting for block devices as requested 00:04:40.401 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:40.401 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:40.660 23:48:47 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:40.660 23:48:47 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:40.660 23:48:47 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:40.660 23:48:47 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:40.660 23:48:47 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:40.660 23:48:47 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:40.660 23:48:47 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:40.660 23:48:47 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:40.660 23:48:47 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:40.660 23:48:47 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:40.660 23:48:47 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:40.660 23:48:47 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:40.660 23:48:47 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:40.660 23:48:47 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:40.660 23:48:47 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:40.660 23:48:47 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:40.660 23:48:47 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:40.660 23:48:47 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:40.660 23:48:47 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:40.660 23:48:47 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:40.660 23:48:47 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:40.660 23:48:47 -- common/autotest_common.sh@1543 -- # continue 00:04:40.660 23:48:47 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:40.660 23:48:47 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:40.660 23:48:47 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:40.660 23:48:47 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:40.660 23:48:47 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:40.660 23:48:47 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:40.660 23:48:47 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:40.660 23:48:47 -- 
common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:40.660 23:48:47 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:40.660 23:48:47 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:40.660 23:48:47 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:40.660 23:48:47 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:40.660 23:48:47 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:40.660 23:48:47 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:40.660 23:48:47 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:40.660 23:48:47 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:40.660 23:48:47 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:40.660 23:48:47 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:40.660 23:48:47 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:40.660 23:48:47 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:40.660 23:48:47 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:40.660 23:48:47 -- common/autotest_common.sh@1543 -- # continue 00:04:40.660 23:48:47 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:40.660 23:48:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:40.660 23:48:47 -- common/autotest_common.sh@10 -- # set +x 00:04:40.660 23:48:47 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:40.660 23:48:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:40.660 23:48:47 -- common/autotest_common.sh@10 -- # set +x 00:04:40.660 23:48:47 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:41.227 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.487 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.487 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.487 23:48:48 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:41.487 23:48:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:41.487 23:48:48 -- common/autotest_common.sh@10 -- # set +x 00:04:41.487 23:48:48 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:41.487 23:48:48 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:41.487 23:48:48 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:41.487 23:48:48 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:41.487 23:48:48 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:41.487 23:48:48 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:41.487 23:48:48 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:41.487 23:48:48 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:41.487 23:48:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:41.487 23:48:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:41.487 23:48:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:41.487 23:48:48 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:41.487 23:48:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:41.487 23:48:48 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:41.487 23:48:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:41.487 23:48:48 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:41.487 23:48:48 -- common/autotest_common.sh@1566 -- # cat 
/sys/bus/pci/devices/0000:00:10.0/device 00:04:41.487 23:48:48 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:41.487 23:48:48 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:41.487 23:48:48 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:41.487 23:48:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:41.487 23:48:48 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:41.487 23:48:48 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:41.487 23:48:48 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:41.487 23:48:48 -- common/autotest_common.sh@1572 -- # return 0 00:04:41.487 23:48:48 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:41.487 23:48:48 -- common/autotest_common.sh@1580 -- # return 0 00:04:41.487 23:48:48 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:41.487 23:48:48 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:41.487 23:48:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:41.487 23:48:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:41.487 23:48:48 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:41.487 23:48:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:41.487 23:48:48 -- common/autotest_common.sh@10 -- # set +x 00:04:41.487 23:48:48 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:41.487 23:48:48 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:41.487 23:48:48 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:41.487 23:48:48 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:41.487 23:48:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.487 23:48:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.487 23:48:48 -- common/autotest_common.sh@10 -- # set +x 00:04:41.747 ************************************ 00:04:41.747 START TEST env 00:04:41.747 ************************************ 00:04:41.747 23:48:48 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:41.747 * Looking for test storage... 
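opal_revert_cleanup above gates on the PCI device ID: it reads /sys/bus/pci/devices/<bdf>/device for every NVMe controller and only reverts controllers reporting 0x0a54; both QEMU controllers here report 0x0010, so nothing is reverted. A minimal sketch of the same filter, enumerating BDFs straight from /sys/class/nvme instead of going through gen_nvme.sh | jq as the traced helper does:

#!/usr/bin/env bash
# Keep only NVMe controllers whose PCI device ID matches the wanted value.
want=0x0a54
matches=()
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
    [[ $(<"/sys/bus/pci/devices/$bdf/device") == "$want" ]] && matches+=("$bdf")
done
echo "controllers with device ID $want: ${matches[*]:-none}"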
00:04:41.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:41.747 23:48:48 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.747 23:48:48 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.747 23:48:48 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.747 23:48:48 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.747 23:48:48 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.747 23:48:48 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.747 23:48:48 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.747 23:48:48 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.747 23:48:48 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.747 23:48:48 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.747 23:48:48 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.747 23:48:48 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.747 23:48:48 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.747 23:48:48 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.747 23:48:48 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.747 23:48:48 env -- scripts/common.sh@344 -- # case "$op" in 00:04:41.747 23:48:48 env -- scripts/common.sh@345 -- # : 1 00:04:41.747 23:48:48 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.747 23:48:48 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.747 23:48:48 env -- scripts/common.sh@365 -- # decimal 1 00:04:41.747 23:48:48 env -- scripts/common.sh@353 -- # local d=1 00:04:41.747 23:48:48 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.747 23:48:48 env -- scripts/common.sh@355 -- # echo 1 00:04:41.747 23:48:48 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.747 23:48:48 env -- scripts/common.sh@366 -- # decimal 2 00:04:41.747 23:48:48 env -- scripts/common.sh@353 -- # local d=2 00:04:41.747 23:48:48 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.747 23:48:48 env -- scripts/common.sh@355 -- # echo 2 00:04:41.747 23:48:48 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.747 23:48:48 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.747 23:48:48 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.747 23:48:48 env -- scripts/common.sh@368 -- # return 0 00:04:41.747 23:48:48 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.747 23:48:48 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.747 --rc genhtml_branch_coverage=1 00:04:41.747 --rc genhtml_function_coverage=1 00:04:41.747 --rc genhtml_legend=1 00:04:41.747 --rc geninfo_all_blocks=1 00:04:41.747 --rc geninfo_unexecuted_blocks=1 00:04:41.747 00:04:41.747 ' 00:04:41.747 23:48:48 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.747 --rc genhtml_branch_coverage=1 00:04:41.747 --rc genhtml_function_coverage=1 00:04:41.747 --rc genhtml_legend=1 00:04:41.747 --rc geninfo_all_blocks=1 00:04:41.747 --rc geninfo_unexecuted_blocks=1 00:04:41.747 00:04:41.747 ' 00:04:41.747 23:48:48 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.747 --rc genhtml_branch_coverage=1 00:04:41.747 --rc genhtml_function_coverage=1 00:04:41.747 --rc 
genhtml_legend=1 00:04:41.747 --rc geninfo_all_blocks=1 00:04:41.747 --rc geninfo_unexecuted_blocks=1 00:04:41.747 00:04:41.747 ' 00:04:41.747 23:48:48 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.747 --rc genhtml_branch_coverage=1 00:04:41.747 --rc genhtml_function_coverage=1 00:04:41.747 --rc genhtml_legend=1 00:04:41.747 --rc geninfo_all_blocks=1 00:04:41.747 --rc geninfo_unexecuted_blocks=1 00:04:41.747 00:04:41.747 ' 00:04:41.747 23:48:48 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:41.747 23:48:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.747 23:48:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.747 23:48:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.747 ************************************ 00:04:41.747 START TEST env_memory 00:04:41.747 ************************************ 00:04:41.747 23:48:48 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:41.747 00:04:41.747 00:04:41.747 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.747 http://cunit.sourceforge.net/ 00:04:41.747 00:04:41.747 00:04:41.747 Suite: memory 00:04:42.006 Test: alloc and free memory map ...[2024-11-18 23:48:48.457942] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:42.006 passed 00:04:42.006 Test: mem map translation ...[2024-11-18 23:48:48.519262] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:42.006 [2024-11-18 23:48:48.519336] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:42.006 [2024-11-18 23:48:48.519432] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:42.006 [2024-11-18 23:48:48.519464] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:42.006 passed 00:04:42.006 Test: mem map registration ...[2024-11-18 23:48:48.617961] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:42.006 [2024-11-18 23:48:48.618030] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:42.006 passed 00:04:42.265 Test: mem map adjacent registrations ...passed 00:04:42.265 00:04:42.265 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.265 suites 1 1 n/a 0 0 00:04:42.265 tests 4 4 4 0 0 00:04:42.265 asserts 152 152 152 0 n/a 00:04:42.265 00:04:42.265 Elapsed time = 0.344 seconds 00:04:42.265 00:04:42.265 real 0m0.384s 00:04:42.265 user 0m0.360s 00:04:42.265 sys 0m0.019s 00:04:42.265 23:48:48 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.265 23:48:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:42.266 ************************************ 00:04:42.266 END TEST env_memory 00:04:42.266 ************************************ 00:04:42.266 23:48:48 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:42.266 23:48:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.266 23:48:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.266 23:48:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.266 ************************************ 00:04:42.266 START TEST env_vtophys 00:04:42.266 ************************************ 00:04:42.266 23:48:48 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:42.266 EAL: lib.eal log level changed from notice to debug 00:04:42.266 EAL: Detected lcore 0 as core 0 on socket 0 00:04:42.266 EAL: Detected lcore 1 as core 0 on socket 0 00:04:42.266 EAL: Detected lcore 2 as core 0 on socket 0 00:04:42.266 EAL: Detected lcore 3 as core 0 on socket 0 00:04:42.266 EAL: Detected lcore 4 as core 0 on socket 0 00:04:42.266 EAL: Detected lcore 5 as core 0 on socket 0 00:04:42.266 EAL: Detected lcore 6 as core 0 on socket 0 00:04:42.266 EAL: Detected lcore 7 as core 0 on socket 0 00:04:42.266 EAL: Detected lcore 8 as core 0 on socket 0 00:04:42.266 EAL: Detected lcore 9 as core 0 on socket 0 00:04:42.266 EAL: Maximum logical cores by configuration: 128 00:04:42.266 EAL: Detected CPU lcores: 10 00:04:42.266 EAL: Detected NUMA nodes: 1 00:04:42.266 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:42.266 EAL: Detected shared linkage of DPDK 00:04:42.266 EAL: No shared files mode enabled, IPC will be disabled 00:04:42.266 EAL: Selected IOVA mode 'PA' 00:04:42.266 EAL: Probing VFIO support... 00:04:42.266 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:42.266 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:42.266 EAL: Ask a virtual area of 0x2e000 bytes 00:04:42.266 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:42.266 EAL: Setting up physically contiguous memory... 
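EAL settles on IOVA mode 'PA' above because its VFIO probe finds neither module in the guest ("Module /sys/module/vfio not found"). A small sketch of the same presence check, handy when an IOVA mode of 'VA' is expected instead:

#!/usr/bin/env bash
# EAL's probe boils down to whether the vfio modules show up under /sys/module.
for mod in vfio vfio_pci; do
    if [[ -d /sys/module/$mod ]]; then
        echo "$mod: loaded"
    else
        echo "$mod: missing, EAL will skip VFIO and fall back to PA/uio"
    fi
done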
00:04:42.266 EAL: Setting maximum number of open files to 524288 00:04:42.266 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:42.266 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:42.266 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.266 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:42.266 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.266 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.266 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:42.266 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:42.266 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.266 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:42.266 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.266 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.266 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:42.266 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:42.266 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.266 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:42.266 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.266 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.266 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:42.266 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:42.266 EAL: Ask a virtual area of 0x61000 bytes 00:04:42.266 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:42.266 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:42.266 EAL: Ask a virtual area of 0x400000000 bytes 00:04:42.266 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:42.266 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:42.266 EAL: Hugepages will be freed exactly as allocated. 00:04:42.266 EAL: No shared files mode enabled, IPC is disabled 00:04:42.266 EAL: No shared files mode enabled, IPC is disabled 00:04:42.525 EAL: TSC frequency is ~2200000 KHz 00:04:42.525 EAL: Main lcore 0 is ready (tid=7f8581ac7a40;cpuset=[0]) 00:04:42.525 EAL: Trying to obtain current memory policy. 00:04:42.525 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.525 EAL: Restoring previous memory policy: 0 00:04:42.525 EAL: request: mp_malloc_sync 00:04:42.525 EAL: No shared files mode enabled, IPC is disabled 00:04:42.525 EAL: Heap on socket 0 was expanded by 2MB 00:04:42.525 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:42.525 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:42.525 EAL: Mem event callback 'spdk:(nil)' registered 00:04:42.525 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:42.525 00:04:42.525 00:04:42.525 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.525 http://cunit.sourceforge.net/ 00:04:42.525 00:04:42.525 00:04:42.525 Suite: components_suite 00:04:42.784 Test: vtophys_malloc_test ...passed 00:04:42.784 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
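The expand/shrink pairs that follow are EAL mapping 2MB hugepages on demand and releasing them "exactly as allocated". One hedged way to watch that happen from outside the test is to poll the kernel's hugepage counters; this is plain /proc/meminfo, not an SPDK interface:

#!/usr/bin/env bash
# HugePages_Free drops while the heap expands and recovers when it shrinks.
while sleep 0.5; do
    awk '/HugePages_(Total|Free):/ {printf "%s %s  ", $1, $2} END {print ""}' /proc/meminfo
done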
00:04:42.784 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.784 EAL: Restoring previous memory policy: 4 00:04:42.784 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.784 EAL: request: mp_malloc_sync 00:04:42.784 EAL: No shared files mode enabled, IPC is disabled 00:04:42.784 EAL: Heap on socket 0 was expanded by 4MB 00:04:42.784 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.784 EAL: request: mp_malloc_sync 00:04:42.784 EAL: No shared files mode enabled, IPC is disabled 00:04:42.784 EAL: Heap on socket 0 was shrunk by 4MB 00:04:42.784 EAL: Trying to obtain current memory policy. 00:04:42.784 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.784 EAL: Restoring previous memory policy: 4 00:04:42.784 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.784 EAL: request: mp_malloc_sync 00:04:42.784 EAL: No shared files mode enabled, IPC is disabled 00:04:42.784 EAL: Heap on socket 0 was expanded by 6MB 00:04:42.784 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.784 EAL: request: mp_malloc_sync 00:04:42.784 EAL: No shared files mode enabled, IPC is disabled 00:04:42.784 EAL: Heap on socket 0 was shrunk by 6MB 00:04:42.784 EAL: Trying to obtain current memory policy. 00:04:42.784 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.784 EAL: Restoring previous memory policy: 4 00:04:42.784 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.784 EAL: request: mp_malloc_sync 00:04:42.784 EAL: No shared files mode enabled, IPC is disabled 00:04:42.784 EAL: Heap on socket 0 was expanded by 10MB 00:04:43.042 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.042 EAL: request: mp_malloc_sync 00:04:43.042 EAL: No shared files mode enabled, IPC is disabled 00:04:43.042 EAL: Heap on socket 0 was shrunk by 10MB 00:04:43.042 EAL: Trying to obtain current memory policy. 00:04:43.042 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.042 EAL: Restoring previous memory policy: 4 00:04:43.042 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.042 EAL: request: mp_malloc_sync 00:04:43.042 EAL: No shared files mode enabled, IPC is disabled 00:04:43.042 EAL: Heap on socket 0 was expanded by 18MB 00:04:43.042 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.042 EAL: request: mp_malloc_sync 00:04:43.042 EAL: No shared files mode enabled, IPC is disabled 00:04:43.042 EAL: Heap on socket 0 was shrunk by 18MB 00:04:43.042 EAL: Trying to obtain current memory policy. 00:04:43.042 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.043 EAL: Restoring previous memory policy: 4 00:04:43.043 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.043 EAL: request: mp_malloc_sync 00:04:43.043 EAL: No shared files mode enabled, IPC is disabled 00:04:43.043 EAL: Heap on socket 0 was expanded by 34MB 00:04:43.043 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.043 EAL: request: mp_malloc_sync 00:04:43.043 EAL: No shared files mode enabled, IPC is disabled 00:04:43.043 EAL: Heap on socket 0 was shrunk by 34MB 00:04:43.043 EAL: Trying to obtain current memory policy. 
00:04:43.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.043 EAL: Restoring previous memory policy: 4 00:04:43.043 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.043 EAL: request: mp_malloc_sync 00:04:43.043 EAL: No shared files mode enabled, IPC is disabled 00:04:43.043 EAL: Heap on socket 0 was expanded by 66MB 00:04:43.043 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.043 EAL: request: mp_malloc_sync 00:04:43.043 EAL: No shared files mode enabled, IPC is disabled 00:04:43.043 EAL: Heap on socket 0 was shrunk by 66MB 00:04:43.302 EAL: Trying to obtain current memory policy. 00:04:43.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.302 EAL: Restoring previous memory policy: 4 00:04:43.302 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.302 EAL: request: mp_malloc_sync 00:04:43.302 EAL: No shared files mode enabled, IPC is disabled 00:04:43.302 EAL: Heap on socket 0 was expanded by 130MB 00:04:43.302 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.302 EAL: request: mp_malloc_sync 00:04:43.302 EAL: No shared files mode enabled, IPC is disabled 00:04:43.302 EAL: Heap on socket 0 was shrunk by 130MB 00:04:43.561 EAL: Trying to obtain current memory policy. 00:04:43.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.561 EAL: Restoring previous memory policy: 4 00:04:43.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.561 EAL: request: mp_malloc_sync 00:04:43.561 EAL: No shared files mode enabled, IPC is disabled 00:04:43.561 EAL: Heap on socket 0 was expanded by 258MB 00:04:43.821 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.080 EAL: request: mp_malloc_sync 00:04:44.080 EAL: No shared files mode enabled, IPC is disabled 00:04:44.080 EAL: Heap on socket 0 was shrunk by 258MB 00:04:44.338 EAL: Trying to obtain current memory policy. 00:04:44.338 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.338 EAL: Restoring previous memory policy: 4 00:04:44.338 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.338 EAL: request: mp_malloc_sync 00:04:44.338 EAL: No shared files mode enabled, IPC is disabled 00:04:44.338 EAL: Heap on socket 0 was expanded by 514MB 00:04:44.907 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.166 EAL: request: mp_malloc_sync 00:04:45.166 EAL: No shared files mode enabled, IPC is disabled 00:04:45.166 EAL: Heap on socket 0 was shrunk by 514MB 00:04:45.735 EAL: Trying to obtain current memory policy. 
00:04:45.735 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.735 EAL: Restoring previous memory policy: 4 00:04:45.735 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.735 EAL: request: mp_malloc_sync 00:04:45.735 EAL: No shared files mode enabled, IPC is disabled 00:04:45.735 EAL: Heap on socket 0 was expanded by 1026MB 00:04:47.111 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.111 EAL: request: mp_malloc_sync 00:04:47.111 EAL: No shared files mode enabled, IPC is disabled 00:04:47.111 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:48.490 passed 00:04:48.490 00:04:48.490 Run Summary: Type Total Ran Passed Failed Inactive 00:04:48.490 suites 1 1 n/a 0 0 00:04:48.490 tests 2 2 2 0 0 00:04:48.490 asserts 5705 5705 5705 0 n/a 00:04:48.490 00:04:48.490 Elapsed time = 5.767 seconds 00:04:48.490 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.490 EAL: request: mp_malloc_sync 00:04:48.490 EAL: No shared files mode enabled, IPC is disabled 00:04:48.490 EAL: Heap on socket 0 was shrunk by 2MB 00:04:48.490 EAL: No shared files mode enabled, IPC is disabled 00:04:48.490 EAL: No shared files mode enabled, IPC is disabled 00:04:48.490 EAL: No shared files mode enabled, IPC is disabled 00:04:48.490 00:04:48.490 real 0m6.086s 00:04:48.490 user 0m5.283s 00:04:48.490 sys 0m0.656s 00:04:48.490 23:48:54 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.490 23:48:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:48.490 ************************************ 00:04:48.490 END TEST env_vtophys 00:04:48.490 ************************************ 00:04:48.490 23:48:54 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:48.490 23:48:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.490 23:48:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.490 23:48:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.490 ************************************ 00:04:48.490 START TEST env_pci 00:04:48.490 ************************************ 00:04:48.490 23:48:54 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:48.490 00:04:48.490 00:04:48.490 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.490 http://cunit.sourceforge.net/ 00:04:48.490 00:04:48.490 00:04:48.490 Suite: pci 00:04:48.490 Test: pci_hook ...[2024-11-18 23:48:54.999253] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57332 has claimed it 00:04:48.490 passed 00:04:48.490 00:04:48.490 Run Summary: Type Total Ran Passed Failed Inactive 00:04:48.490 suites 1 1 n/a 0 0 00:04:48.490 tests 1 1 1 0 0 00:04:48.490 asserts 25 25 25 0 n/a 00:04:48.490 00:04:48.490 Elapsed time = 0.006 secondsEAL: Cannot find device (10000:00:01.0) 00:04:48.490 EAL: Failed to attach device on primary process 00:04:48.490 00:04:48.490 00:04:48.490 real 0m0.077s 00:04:48.490 user 0m0.041s 00:04:48.490 sys 0m0.035s 00:04:48.490 23:48:55 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.490 23:48:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:48.490 ************************************ 00:04:48.490 END TEST env_pci 00:04:48.490 ************************************ 00:04:48.490 23:48:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:48.490 23:48:55 env -- env/env.sh@15 -- # uname 00:04:48.490 23:48:55 env -- 
env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:48.490 23:48:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:48.490 23:48:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:48.490 23:48:55 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:48.490 23:48:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.490 23:48:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.490 ************************************ 00:04:48.490 START TEST env_dpdk_post_init 00:04:48.490 ************************************ 00:04:48.490 23:48:55 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:48.490 EAL: Detected CPU lcores: 10 00:04:48.490 EAL: Detected NUMA nodes: 1 00:04:48.490 EAL: Detected shared linkage of DPDK 00:04:48.749 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:48.749 EAL: Selected IOVA mode 'PA' 00:04:48.749 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:48.749 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:48.749 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:48.749 Starting DPDK initialization... 00:04:48.749 Starting SPDK post initialization... 00:04:48.749 SPDK NVMe probe 00:04:48.749 Attaching to 0000:00:10.0 00:04:48.749 Attaching to 0000:00:11.0 00:04:48.749 Attached to 0000:00:10.0 00:04:48.749 Attached to 0000:00:11.0 00:04:48.749 Cleaning up... 00:04:48.749 00:04:48.749 real 0m0.282s 00:04:48.749 user 0m0.094s 00:04:48.749 sys 0m0.087s 00:04:48.749 23:48:55 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.749 23:48:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:48.749 ************************************ 00:04:48.749 END TEST env_dpdk_post_init 00:04:48.749 ************************************ 00:04:48.749 23:48:55 env -- env/env.sh@26 -- # uname 00:04:48.749 23:48:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:48.749 23:48:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:48.749 23:48:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.749 23:48:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.749 23:48:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.009 ************************************ 00:04:49.009 START TEST env_mem_callbacks 00:04:49.009 ************************************ 00:04:49.009 23:48:55 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:49.009 EAL: Detected CPU lcores: 10 00:04:49.009 EAL: Detected NUMA nodes: 1 00:04:49.009 EAL: Detected shared linkage of DPDK 00:04:49.009 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:49.009 EAL: Selected IOVA mode 'PA' 00:04:49.009 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:49.009 00:04:49.009 00:04:49.009 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.009 http://cunit.sourceforge.net/ 00:04:49.009 00:04:49.009 00:04:49.009 Suite: memory 00:04:49.009 Test: test ... 
00:04:49.009 register 0x200000200000 2097152 00:04:49.009 malloc 3145728 00:04:49.009 register 0x200000400000 4194304 00:04:49.009 buf 0x2000004fffc0 len 3145728 PASSED 00:04:49.009 malloc 64 00:04:49.009 buf 0x2000004ffec0 len 64 PASSED 00:04:49.009 malloc 4194304 00:04:49.009 register 0x200000800000 6291456 00:04:49.009 buf 0x2000009fffc0 len 4194304 PASSED 00:04:49.009 free 0x2000004fffc0 3145728 00:04:49.009 free 0x2000004ffec0 64 00:04:49.009 unregister 0x200000400000 4194304 PASSED 00:04:49.009 free 0x2000009fffc0 4194304 00:04:49.009 unregister 0x200000800000 6291456 PASSED 00:04:49.009 malloc 8388608 00:04:49.009 register 0x200000400000 10485760 00:04:49.009 buf 0x2000005fffc0 len 8388608 PASSED 00:04:49.009 free 0x2000005fffc0 8388608 00:04:49.009 unregister 0x200000400000 10485760 PASSED 00:04:49.009 passed 00:04:49.009 00:04:49.009 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.009 suites 1 1 n/a 0 0 00:04:49.009 tests 1 1 1 0 0 00:04:49.009 asserts 15 15 15 0 n/a 00:04:49.009 00:04:49.009 Elapsed time = 0.074 seconds 00:04:49.268 00:04:49.268 real 0m0.274s 00:04:49.268 user 0m0.109s 00:04:49.268 sys 0m0.063s 00:04:49.268 23:48:55 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.268 23:48:55 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:49.268 ************************************ 00:04:49.268 END TEST env_mem_callbacks 00:04:49.268 ************************************ 00:04:49.268 00:04:49.268 real 0m7.575s 00:04:49.268 user 0m6.096s 00:04:49.268 sys 0m1.100s 00:04:49.268 23:48:55 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.268 23:48:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.268 ************************************ 00:04:49.268 END TEST env 00:04:49.268 ************************************ 00:04:49.268 23:48:55 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:49.269 23:48:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.269 23:48:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.269 23:48:55 -- common/autotest_common.sh@10 -- # set +x 00:04:49.269 ************************************ 00:04:49.269 START TEST rpc 00:04:49.269 ************************************ 00:04:49.269 23:48:55 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:49.269 * Looking for test storage... 
00:04:49.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:49.269 23:48:55 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.269 23:48:55 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.269 23:48:55 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.528 23:48:55 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.528 23:48:55 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.528 23:48:55 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.528 23:48:55 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.528 23:48:55 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.528 23:48:55 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.528 23:48:55 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.528 23:48:55 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.528 23:48:55 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.528 23:48:55 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.528 23:48:55 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.528 23:48:55 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.528 23:48:55 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:49.528 23:48:55 rpc -- scripts/common.sh@345 -- # : 1 00:04:49.528 23:48:55 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.528 23:48:55 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.528 23:48:55 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:49.528 23:48:55 rpc -- scripts/common.sh@353 -- # local d=1 00:04:49.528 23:48:55 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.528 23:48:55 rpc -- scripts/common.sh@355 -- # echo 1 00:04:49.528 23:48:55 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.528 23:48:55 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:49.528 23:48:55 rpc -- scripts/common.sh@353 -- # local d=2 00:04:49.528 23:48:55 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.528 23:48:55 rpc -- scripts/common.sh@355 -- # echo 2 00:04:49.528 23:48:55 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.528 23:48:55 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.528 23:48:55 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.528 23:48:55 rpc -- scripts/common.sh@368 -- # return 0 00:04:49.528 23:48:55 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.528 23:48:55 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.528 --rc genhtml_branch_coverage=1 00:04:49.528 --rc genhtml_function_coverage=1 00:04:49.528 --rc genhtml_legend=1 00:04:49.528 --rc geninfo_all_blocks=1 00:04:49.528 --rc geninfo_unexecuted_blocks=1 00:04:49.528 00:04:49.528 ' 00:04:49.528 23:48:55 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.528 --rc genhtml_branch_coverage=1 00:04:49.528 --rc genhtml_function_coverage=1 00:04:49.528 --rc genhtml_legend=1 00:04:49.528 --rc geninfo_all_blocks=1 00:04:49.528 --rc geninfo_unexecuted_blocks=1 00:04:49.528 00:04:49.528 ' 00:04:49.528 23:48:55 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.528 --rc genhtml_branch_coverage=1 00:04:49.528 --rc genhtml_function_coverage=1 00:04:49.528 --rc 
genhtml_legend=1 00:04:49.528 --rc geninfo_all_blocks=1 00:04:49.528 --rc geninfo_unexecuted_blocks=1 00:04:49.528 00:04:49.528 ' 00:04:49.528 23:48:55 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.528 --rc genhtml_branch_coverage=1 00:04:49.528 --rc genhtml_function_coverage=1 00:04:49.528 --rc genhtml_legend=1 00:04:49.528 --rc geninfo_all_blocks=1 00:04:49.528 --rc geninfo_unexecuted_blocks=1 00:04:49.528 00:04:49.528 ' 00:04:49.528 23:48:55 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57459 00:04:49.528 23:48:55 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:49.528 23:48:55 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.528 23:48:55 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57459 00:04:49.528 23:48:55 rpc -- common/autotest_common.sh@835 -- # '[' -z 57459 ']' 00:04:49.528 23:48:55 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.529 23:48:55 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.529 23:48:55 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.529 23:48:55 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.529 23:48:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.529 [2024-11-18 23:48:56.126594] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:49.529 [2024-11-18 23:48:56.126787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57459 ] 00:04:49.788 [2024-11-18 23:48:56.313421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.788 [2024-11-18 23:48:56.438054] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:49.788 [2024-11-18 23:48:56.438133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57459' to capture a snapshot of events at runtime. 00:04:49.788 [2024-11-18 23:48:56.438154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:49.788 [2024-11-18 23:48:56.438172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:49.788 [2024-11-18 23:48:56.438187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57459 for offline analysis/debug. 
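
The rpc_integrity test that follows exercises this freshly started spdk_tgt purely over JSON-RPC. A minimal hand-run sketch of the same bdev round-trip, assuming the repo layout visible in this log and that scripts/rpc.py (not shown verbatim in the trace) talks to the default /var/tmp/spdk.sock socket:

  cd /home/vagrant/spdk_repo/spdk
  # create an 8 MiB malloc bdev with 512-byte blocks; the call prints the new name (Malloc0)
  scripts/rpc.py bdev_malloc_create 8 512
  # stack a passthru bdev on top, then confirm both are listed
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length    # expect 2
  # tear down in reverse order; bdev_get_bdevs should then report 0 again
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0

The rpc_cmd helper seen throughout the trace below is essentially this, with xtrace control and retry plumbing wrapped around it.
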
00:04:49.788 [2024-11-18 23:48:56.439956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.047 [2024-11-18 23:48:56.690666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:50.616 23:48:57 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.616 23:48:57 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:50.616 23:48:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:50.616 23:48:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:50.616 23:48:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:50.616 23:48:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:50.616 23:48:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.616 23:48:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.616 23:48:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.616 ************************************ 00:04:50.616 START TEST rpc_integrity 00:04:50.616 ************************************ 00:04:50.616 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:50.616 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:50.616 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.616 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.616 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.616 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:50.616 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:50.616 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:50.616 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:50.616 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.616 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.616 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.616 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:50.616 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:50.616 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.616 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.616 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.616 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:50.616 { 00:04:50.616 "name": "Malloc0", 00:04:50.616 "aliases": [ 00:04:50.616 "486efdcd-e7c3-4403-9283-70e3fd96e413" 00:04:50.616 ], 00:04:50.616 "product_name": "Malloc disk", 00:04:50.616 "block_size": 512, 00:04:50.616 "num_blocks": 16384, 00:04:50.616 "uuid": "486efdcd-e7c3-4403-9283-70e3fd96e413", 00:04:50.616 "assigned_rate_limits": { 00:04:50.616 "rw_ios_per_sec": 0, 00:04:50.616 "rw_mbytes_per_sec": 0, 00:04:50.616 "r_mbytes_per_sec": 0, 00:04:50.616 "w_mbytes_per_sec": 0 00:04:50.616 }, 00:04:50.616 "claimed": false, 00:04:50.616 "zoned": false, 00:04:50.616 
"supported_io_types": { 00:04:50.616 "read": true, 00:04:50.616 "write": true, 00:04:50.616 "unmap": true, 00:04:50.616 "flush": true, 00:04:50.616 "reset": true, 00:04:50.616 "nvme_admin": false, 00:04:50.616 "nvme_io": false, 00:04:50.616 "nvme_io_md": false, 00:04:50.616 "write_zeroes": true, 00:04:50.616 "zcopy": true, 00:04:50.616 "get_zone_info": false, 00:04:50.616 "zone_management": false, 00:04:50.616 "zone_append": false, 00:04:50.616 "compare": false, 00:04:50.616 "compare_and_write": false, 00:04:50.616 "abort": true, 00:04:50.616 "seek_hole": false, 00:04:50.616 "seek_data": false, 00:04:50.616 "copy": true, 00:04:50.616 "nvme_iov_md": false 00:04:50.616 }, 00:04:50.616 "memory_domains": [ 00:04:50.616 { 00:04:50.616 "dma_device_id": "system", 00:04:50.616 "dma_device_type": 1 00:04:50.616 }, 00:04:50.616 { 00:04:50.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.616 "dma_device_type": 2 00:04:50.616 } 00:04:50.616 ], 00:04:50.616 "driver_specific": {} 00:04:50.616 } 00:04:50.616 ]' 00:04:50.616 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:50.877 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.877 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:50.877 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.877 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.877 [2024-11-18 23:48:57.328111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:50.877 [2024-11-18 23:48:57.328384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:50.877 [2024-11-18 23:48:57.328439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:04:50.877 [2024-11-18 23:48:57.328456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.877 [2024-11-18 23:48:57.331169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.877 [2024-11-18 23:48:57.331209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:50.877 Passthru0 00:04:50.877 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.877 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.877 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.877 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.877 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.877 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.877 { 00:04:50.877 "name": "Malloc0", 00:04:50.877 "aliases": [ 00:04:50.877 "486efdcd-e7c3-4403-9283-70e3fd96e413" 00:04:50.877 ], 00:04:50.877 "product_name": "Malloc disk", 00:04:50.877 "block_size": 512, 00:04:50.877 "num_blocks": 16384, 00:04:50.877 "uuid": "486efdcd-e7c3-4403-9283-70e3fd96e413", 00:04:50.877 "assigned_rate_limits": { 00:04:50.877 "rw_ios_per_sec": 0, 00:04:50.877 "rw_mbytes_per_sec": 0, 00:04:50.877 "r_mbytes_per_sec": 0, 00:04:50.877 "w_mbytes_per_sec": 0 00:04:50.877 }, 00:04:50.877 "claimed": true, 00:04:50.877 "claim_type": "exclusive_write", 00:04:50.877 "zoned": false, 00:04:50.877 "supported_io_types": { 00:04:50.877 "read": true, 00:04:50.877 "write": true, 00:04:50.877 "unmap": true, 00:04:50.877 "flush": true, 00:04:50.877 "reset": true, 00:04:50.877 "nvme_admin": false, 
00:04:50.877 "nvme_io": false, 00:04:50.877 "nvme_io_md": false, 00:04:50.877 "write_zeroes": true, 00:04:50.877 "zcopy": true, 00:04:50.877 "get_zone_info": false, 00:04:50.877 "zone_management": false, 00:04:50.877 "zone_append": false, 00:04:50.877 "compare": false, 00:04:50.877 "compare_and_write": false, 00:04:50.877 "abort": true, 00:04:50.877 "seek_hole": false, 00:04:50.877 "seek_data": false, 00:04:50.877 "copy": true, 00:04:50.877 "nvme_iov_md": false 00:04:50.877 }, 00:04:50.877 "memory_domains": [ 00:04:50.877 { 00:04:50.877 "dma_device_id": "system", 00:04:50.877 "dma_device_type": 1 00:04:50.877 }, 00:04:50.877 { 00:04:50.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.877 "dma_device_type": 2 00:04:50.877 } 00:04:50.877 ], 00:04:50.877 "driver_specific": {} 00:04:50.877 }, 00:04:50.877 { 00:04:50.877 "name": "Passthru0", 00:04:50.877 "aliases": [ 00:04:50.877 "06965038-2987-5fc2-b601-9120bba8549c" 00:04:50.877 ], 00:04:50.877 "product_name": "passthru", 00:04:50.877 "block_size": 512, 00:04:50.877 "num_blocks": 16384, 00:04:50.877 "uuid": "06965038-2987-5fc2-b601-9120bba8549c", 00:04:50.877 "assigned_rate_limits": { 00:04:50.877 "rw_ios_per_sec": 0, 00:04:50.877 "rw_mbytes_per_sec": 0, 00:04:50.877 "r_mbytes_per_sec": 0, 00:04:50.877 "w_mbytes_per_sec": 0 00:04:50.877 }, 00:04:50.877 "claimed": false, 00:04:50.877 "zoned": false, 00:04:50.877 "supported_io_types": { 00:04:50.877 "read": true, 00:04:50.877 "write": true, 00:04:50.877 "unmap": true, 00:04:50.877 "flush": true, 00:04:50.877 "reset": true, 00:04:50.877 "nvme_admin": false, 00:04:50.877 "nvme_io": false, 00:04:50.877 "nvme_io_md": false, 00:04:50.877 "write_zeroes": true, 00:04:50.877 "zcopy": true, 00:04:50.877 "get_zone_info": false, 00:04:50.877 "zone_management": false, 00:04:50.877 "zone_append": false, 00:04:50.877 "compare": false, 00:04:50.877 "compare_and_write": false, 00:04:50.877 "abort": true, 00:04:50.877 "seek_hole": false, 00:04:50.877 "seek_data": false, 00:04:50.877 "copy": true, 00:04:50.877 "nvme_iov_md": false 00:04:50.877 }, 00:04:50.877 "memory_domains": [ 00:04:50.877 { 00:04:50.877 "dma_device_id": "system", 00:04:50.877 "dma_device_type": 1 00:04:50.877 }, 00:04:50.877 { 00:04:50.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.877 "dma_device_type": 2 00:04:50.877 } 00:04:50.877 ], 00:04:50.877 "driver_specific": { 00:04:50.877 "passthru": { 00:04:50.877 "name": "Passthru0", 00:04:50.877 "base_bdev_name": "Malloc0" 00:04:50.877 } 00:04:50.877 } 00:04:50.877 } 00:04:50.877 ]' 00:04:50.877 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:50.877 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:50.877 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:50.877 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.877 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.877 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.877 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:50.877 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.877 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.877 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.877 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:50.877 23:48:57 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.877 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.877 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.877 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:50.877 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:50.877 ************************************ 00:04:50.877 END TEST rpc_integrity 00:04:50.877 ************************************ 00:04:50.877 23:48:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:50.877 00:04:50.877 real 0m0.350s 00:04:50.877 user 0m0.216s 00:04:50.877 sys 0m0.042s 00:04:50.877 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.877 23:48:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.877 23:48:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:50.877 23:48:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.877 23:48:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.877 23:48:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.137 ************************************ 00:04:51.137 START TEST rpc_plugins 00:04:51.137 ************************************ 00:04:51.137 23:48:57 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:51.137 23:48:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:51.137 23:48:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.137 23:48:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.137 23:48:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.137 23:48:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:51.137 23:48:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:51.137 23:48:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.137 23:48:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.137 23:48:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.137 23:48:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:51.137 { 00:04:51.137 "name": "Malloc1", 00:04:51.137 "aliases": [ 00:04:51.137 "6b0c7869-bf87-44dc-9027-f68a84b741ea" 00:04:51.137 ], 00:04:51.137 "product_name": "Malloc disk", 00:04:51.137 "block_size": 4096, 00:04:51.137 "num_blocks": 256, 00:04:51.137 "uuid": "6b0c7869-bf87-44dc-9027-f68a84b741ea", 00:04:51.137 "assigned_rate_limits": { 00:04:51.137 "rw_ios_per_sec": 0, 00:04:51.137 "rw_mbytes_per_sec": 0, 00:04:51.137 "r_mbytes_per_sec": 0, 00:04:51.137 "w_mbytes_per_sec": 0 00:04:51.137 }, 00:04:51.137 "claimed": false, 00:04:51.137 "zoned": false, 00:04:51.137 "supported_io_types": { 00:04:51.137 "read": true, 00:04:51.137 "write": true, 00:04:51.137 "unmap": true, 00:04:51.137 "flush": true, 00:04:51.137 "reset": true, 00:04:51.137 "nvme_admin": false, 00:04:51.137 "nvme_io": false, 00:04:51.137 "nvme_io_md": false, 00:04:51.137 "write_zeroes": true, 00:04:51.137 "zcopy": true, 00:04:51.137 "get_zone_info": false, 00:04:51.137 "zone_management": false, 00:04:51.137 "zone_append": false, 00:04:51.137 "compare": false, 00:04:51.137 "compare_and_write": false, 00:04:51.137 "abort": true, 00:04:51.137 "seek_hole": false, 00:04:51.137 "seek_data": false, 00:04:51.137 "copy": true, 00:04:51.137 "nvme_iov_md": false 00:04:51.137 }, 00:04:51.137 "memory_domains": [ 00:04:51.137 { 
00:04:51.137 "dma_device_id": "system", 00:04:51.137 "dma_device_type": 1 00:04:51.137 }, 00:04:51.137 { 00:04:51.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.137 "dma_device_type": 2 00:04:51.137 } 00:04:51.137 ], 00:04:51.137 "driver_specific": {} 00:04:51.137 } 00:04:51.137 ]' 00:04:51.137 23:48:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:51.137 23:48:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:51.137 23:48:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:51.137 23:48:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.137 23:48:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.137 23:48:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.137 23:48:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:51.138 23:48:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.138 23:48:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.138 23:48:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.138 23:48:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:51.138 23:48:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:51.138 ************************************ 00:04:51.138 END TEST rpc_plugins 00:04:51.138 ************************************ 00:04:51.138 23:48:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:51.138 00:04:51.138 real 0m0.163s 00:04:51.138 user 0m0.108s 00:04:51.138 sys 0m0.020s 00:04:51.138 23:48:57 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.138 23:48:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.138 23:48:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:51.138 23:48:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.138 23:48:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.138 23:48:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.138 ************************************ 00:04:51.138 START TEST rpc_trace_cmd_test 00:04:51.138 ************************************ 00:04:51.138 23:48:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:51.138 23:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:51.138 23:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:51.138 23:48:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.138 23:48:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:51.138 23:48:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.138 23:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:51.138 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57459", 00:04:51.138 "tpoint_group_mask": "0x8", 00:04:51.138 "iscsi_conn": { 00:04:51.138 "mask": "0x2", 00:04:51.138 "tpoint_mask": "0x0" 00:04:51.138 }, 00:04:51.138 "scsi": { 00:04:51.138 "mask": "0x4", 00:04:51.138 "tpoint_mask": "0x0" 00:04:51.138 }, 00:04:51.138 "bdev": { 00:04:51.138 "mask": "0x8", 00:04:51.138 "tpoint_mask": "0xffffffffffffffff" 00:04:51.138 }, 00:04:51.138 "nvmf_rdma": { 00:04:51.138 "mask": "0x10", 00:04:51.138 "tpoint_mask": "0x0" 00:04:51.138 }, 00:04:51.138 "nvmf_tcp": { 00:04:51.138 "mask": "0x20", 00:04:51.138 "tpoint_mask": "0x0" 00:04:51.138 }, 00:04:51.138 "ftl": { 00:04:51.138 
"mask": "0x40", 00:04:51.138 "tpoint_mask": "0x0" 00:04:51.138 }, 00:04:51.138 "blobfs": { 00:04:51.138 "mask": "0x80", 00:04:51.138 "tpoint_mask": "0x0" 00:04:51.138 }, 00:04:51.138 "dsa": { 00:04:51.138 "mask": "0x200", 00:04:51.138 "tpoint_mask": "0x0" 00:04:51.138 }, 00:04:51.138 "thread": { 00:04:51.138 "mask": "0x400", 00:04:51.138 "tpoint_mask": "0x0" 00:04:51.138 }, 00:04:51.138 "nvme_pcie": { 00:04:51.138 "mask": "0x800", 00:04:51.138 "tpoint_mask": "0x0" 00:04:51.138 }, 00:04:51.138 "iaa": { 00:04:51.138 "mask": "0x1000", 00:04:51.138 "tpoint_mask": "0x0" 00:04:51.138 }, 00:04:51.138 "nvme_tcp": { 00:04:51.138 "mask": "0x2000", 00:04:51.138 "tpoint_mask": "0x0" 00:04:51.138 }, 00:04:51.138 "bdev_nvme": { 00:04:51.138 "mask": "0x4000", 00:04:51.138 "tpoint_mask": "0x0" 00:04:51.138 }, 00:04:51.138 "sock": { 00:04:51.138 "mask": "0x8000", 00:04:51.138 "tpoint_mask": "0x0" 00:04:51.138 }, 00:04:51.138 "blob": { 00:04:51.138 "mask": "0x10000", 00:04:51.138 "tpoint_mask": "0x0" 00:04:51.138 }, 00:04:51.138 "bdev_raid": { 00:04:51.138 "mask": "0x20000", 00:04:51.138 "tpoint_mask": "0x0" 00:04:51.138 }, 00:04:51.138 "scheduler": { 00:04:51.138 "mask": "0x40000", 00:04:51.138 "tpoint_mask": "0x0" 00:04:51.138 } 00:04:51.138 }' 00:04:51.138 23:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:51.397 23:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:51.397 23:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:51.397 23:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:51.397 23:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:51.397 23:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:51.397 23:48:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:51.397 23:48:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:51.397 23:48:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:51.397 ************************************ 00:04:51.397 END TEST rpc_trace_cmd_test 00:04:51.397 ************************************ 00:04:51.397 23:48:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:51.397 00:04:51.397 real 0m0.283s 00:04:51.397 user 0m0.239s 00:04:51.397 sys 0m0.036s 00:04:51.397 23:48:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.397 23:48:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:51.656 23:48:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:51.656 23:48:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:51.656 23:48:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:51.656 23:48:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.656 23:48:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.656 23:48:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.656 ************************************ 00:04:51.656 START TEST rpc_daemon_integrity 00:04:51.656 ************************************ 00:04:51.656 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:51.656 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:51.656 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.656 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.656 
23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.656 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:51.657 { 00:04:51.657 "name": "Malloc2", 00:04:51.657 "aliases": [ 00:04:51.657 "fa95425c-886f-45f9-bfd7-229376dc5246" 00:04:51.657 ], 00:04:51.657 "product_name": "Malloc disk", 00:04:51.657 "block_size": 512, 00:04:51.657 "num_blocks": 16384, 00:04:51.657 "uuid": "fa95425c-886f-45f9-bfd7-229376dc5246", 00:04:51.657 "assigned_rate_limits": { 00:04:51.657 "rw_ios_per_sec": 0, 00:04:51.657 "rw_mbytes_per_sec": 0, 00:04:51.657 "r_mbytes_per_sec": 0, 00:04:51.657 "w_mbytes_per_sec": 0 00:04:51.657 }, 00:04:51.657 "claimed": false, 00:04:51.657 "zoned": false, 00:04:51.657 "supported_io_types": { 00:04:51.657 "read": true, 00:04:51.657 "write": true, 00:04:51.657 "unmap": true, 00:04:51.657 "flush": true, 00:04:51.657 "reset": true, 00:04:51.657 "nvme_admin": false, 00:04:51.657 "nvme_io": false, 00:04:51.657 "nvme_io_md": false, 00:04:51.657 "write_zeroes": true, 00:04:51.657 "zcopy": true, 00:04:51.657 "get_zone_info": false, 00:04:51.657 "zone_management": false, 00:04:51.657 "zone_append": false, 00:04:51.657 "compare": false, 00:04:51.657 "compare_and_write": false, 00:04:51.657 "abort": true, 00:04:51.657 "seek_hole": false, 00:04:51.657 "seek_data": false, 00:04:51.657 "copy": true, 00:04:51.657 "nvme_iov_md": false 00:04:51.657 }, 00:04:51.657 "memory_domains": [ 00:04:51.657 { 00:04:51.657 "dma_device_id": "system", 00:04:51.657 "dma_device_type": 1 00:04:51.657 }, 00:04:51.657 { 00:04:51.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.657 "dma_device_type": 2 00:04:51.657 } 00:04:51.657 ], 00:04:51.657 "driver_specific": {} 00:04:51.657 } 00:04:51.657 ]' 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.657 [2024-11-18 23:48:58.287968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:51.657 [2024-11-18 23:48:58.288074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:51.657 [2024-11-18 23:48:58.288109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:04:51.657 [2024-11-18 23:48:58.288123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:51.657 [2024-11-18 23:48:58.290750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:51.657 [2024-11-18 23:48:58.290793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:51.657 Passthru0 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:51.657 { 00:04:51.657 "name": "Malloc2", 00:04:51.657 "aliases": [ 00:04:51.657 "fa95425c-886f-45f9-bfd7-229376dc5246" 00:04:51.657 ], 00:04:51.657 "product_name": "Malloc disk", 00:04:51.657 "block_size": 512, 00:04:51.657 "num_blocks": 16384, 00:04:51.657 "uuid": "fa95425c-886f-45f9-bfd7-229376dc5246", 00:04:51.657 "assigned_rate_limits": { 00:04:51.657 "rw_ios_per_sec": 0, 00:04:51.657 "rw_mbytes_per_sec": 0, 00:04:51.657 "r_mbytes_per_sec": 0, 00:04:51.657 "w_mbytes_per_sec": 0 00:04:51.657 }, 00:04:51.657 "claimed": true, 00:04:51.657 "claim_type": "exclusive_write", 00:04:51.657 "zoned": false, 00:04:51.657 "supported_io_types": { 00:04:51.657 "read": true, 00:04:51.657 "write": true, 00:04:51.657 "unmap": true, 00:04:51.657 "flush": true, 00:04:51.657 "reset": true, 00:04:51.657 "nvme_admin": false, 00:04:51.657 "nvme_io": false, 00:04:51.657 "nvme_io_md": false, 00:04:51.657 "write_zeroes": true, 00:04:51.657 "zcopy": true, 00:04:51.657 "get_zone_info": false, 00:04:51.657 "zone_management": false, 00:04:51.657 "zone_append": false, 00:04:51.657 "compare": false, 00:04:51.657 "compare_and_write": false, 00:04:51.657 "abort": true, 00:04:51.657 "seek_hole": false, 00:04:51.657 "seek_data": false, 00:04:51.657 "copy": true, 00:04:51.657 "nvme_iov_md": false 00:04:51.657 }, 00:04:51.657 "memory_domains": [ 00:04:51.657 { 00:04:51.657 "dma_device_id": "system", 00:04:51.657 "dma_device_type": 1 00:04:51.657 }, 00:04:51.657 { 00:04:51.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.657 "dma_device_type": 2 00:04:51.657 } 00:04:51.657 ], 00:04:51.657 "driver_specific": {} 00:04:51.657 }, 00:04:51.657 { 00:04:51.657 "name": "Passthru0", 00:04:51.657 "aliases": [ 00:04:51.657 "490a670a-0c6d-50e2-80b4-d82837fe20d7" 00:04:51.657 ], 00:04:51.657 "product_name": "passthru", 00:04:51.657 "block_size": 512, 00:04:51.657 "num_blocks": 16384, 00:04:51.657 "uuid": "490a670a-0c6d-50e2-80b4-d82837fe20d7", 00:04:51.657 "assigned_rate_limits": { 00:04:51.657 "rw_ios_per_sec": 0, 00:04:51.657 "rw_mbytes_per_sec": 0, 00:04:51.657 "r_mbytes_per_sec": 0, 00:04:51.657 "w_mbytes_per_sec": 0 00:04:51.657 }, 00:04:51.657 "claimed": false, 00:04:51.657 "zoned": false, 00:04:51.657 "supported_io_types": { 00:04:51.657 "read": true, 00:04:51.657 "write": true, 00:04:51.657 "unmap": true, 00:04:51.657 "flush": true, 00:04:51.657 "reset": true, 00:04:51.657 "nvme_admin": false, 00:04:51.657 "nvme_io": false, 00:04:51.657 
"nvme_io_md": false, 00:04:51.657 "write_zeroes": true, 00:04:51.657 "zcopy": true, 00:04:51.657 "get_zone_info": false, 00:04:51.657 "zone_management": false, 00:04:51.657 "zone_append": false, 00:04:51.657 "compare": false, 00:04:51.657 "compare_and_write": false, 00:04:51.657 "abort": true, 00:04:51.657 "seek_hole": false, 00:04:51.657 "seek_data": false, 00:04:51.657 "copy": true, 00:04:51.657 "nvme_iov_md": false 00:04:51.657 }, 00:04:51.657 "memory_domains": [ 00:04:51.657 { 00:04:51.657 "dma_device_id": "system", 00:04:51.657 "dma_device_type": 1 00:04:51.657 }, 00:04:51.657 { 00:04:51.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.657 "dma_device_type": 2 00:04:51.657 } 00:04:51.657 ], 00:04:51.657 "driver_specific": { 00:04:51.657 "passthru": { 00:04:51.657 "name": "Passthru0", 00:04:51.657 "base_bdev_name": "Malloc2" 00:04:51.657 } 00:04:51.657 } 00:04:51.657 } 00:04:51.657 ]' 00:04:51.657 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:51.917 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:51.917 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:51.917 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.917 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.917 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.917 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:51.917 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.917 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.917 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.917 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:51.917 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.917 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.917 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.917 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:51.917 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:51.917 ************************************ 00:04:51.917 END TEST rpc_daemon_integrity 00:04:51.917 ************************************ 00:04:51.917 23:48:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:51.917 00:04:51.917 real 0m0.357s 00:04:51.917 user 0m0.226s 00:04:51.917 sys 0m0.046s 00:04:51.917 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.917 23:48:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.917 23:48:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:51.917 23:48:58 rpc -- rpc/rpc.sh@84 -- # killprocess 57459 00:04:51.917 23:48:58 rpc -- common/autotest_common.sh@954 -- # '[' -z 57459 ']' 00:04:51.917 23:48:58 rpc -- common/autotest_common.sh@958 -- # kill -0 57459 00:04:51.917 23:48:58 rpc -- common/autotest_common.sh@959 -- # uname 00:04:51.917 23:48:58 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.917 23:48:58 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57459 00:04:51.917 killing process with pid 57459 00:04:51.917 23:48:58 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.917 23:48:58 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.917 23:48:58 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57459' 00:04:51.917 23:48:58 rpc -- common/autotest_common.sh@973 -- # kill 57459 00:04:51.917 23:48:58 rpc -- common/autotest_common.sh@978 -- # wait 57459 00:04:53.822 00:04:53.822 real 0m4.484s 00:04:53.822 user 0m5.291s 00:04:53.822 sys 0m0.767s 00:04:53.822 23:49:00 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.822 23:49:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.822 ************************************ 00:04:53.822 END TEST rpc 00:04:53.822 ************************************ 00:04:53.822 23:49:00 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:53.822 23:49:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.822 23:49:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.822 23:49:00 -- common/autotest_common.sh@10 -- # set +x 00:04:53.822 ************************************ 00:04:53.822 START TEST skip_rpc 00:04:53.822 ************************************ 00:04:53.822 23:49:00 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:53.822 * Looking for test storage... 00:04:53.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:53.822 23:49:00 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.822 23:49:00 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.822 23:49:00 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.822 23:49:00 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.822 23:49:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.822 23:49:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.822 23:49:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.822 23:49:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.822 23:49:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.822 23:49:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.822 23:49:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.822 23:49:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.822 23:49:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.822 23:49:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.822 23:49:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.822 23:49:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:53.822 23:49:00 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:53.822 23:49:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.822 23:49:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.095 23:49:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:54.095 23:49:00 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:54.095 23:49:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.095 23:49:00 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:54.095 23:49:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.095 23:49:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:54.095 23:49:00 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:54.095 23:49:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.095 23:49:00 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:54.095 23:49:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.095 23:49:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.095 23:49:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.095 23:49:00 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:54.095 23:49:00 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.095 23:49:00 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.095 --rc genhtml_branch_coverage=1 00:04:54.095 --rc genhtml_function_coverage=1 00:04:54.095 --rc genhtml_legend=1 00:04:54.095 --rc geninfo_all_blocks=1 00:04:54.095 --rc geninfo_unexecuted_blocks=1 00:04:54.095 00:04:54.095 ' 00:04:54.095 23:49:00 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.095 --rc genhtml_branch_coverage=1 00:04:54.095 --rc genhtml_function_coverage=1 00:04:54.095 --rc genhtml_legend=1 00:04:54.095 --rc geninfo_all_blocks=1 00:04:54.095 --rc geninfo_unexecuted_blocks=1 00:04:54.095 00:04:54.095 ' 00:04:54.095 23:49:00 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.095 --rc genhtml_branch_coverage=1 00:04:54.095 --rc genhtml_function_coverage=1 00:04:54.095 --rc genhtml_legend=1 00:04:54.095 --rc geninfo_all_blocks=1 00:04:54.095 --rc geninfo_unexecuted_blocks=1 00:04:54.095 00:04:54.095 ' 00:04:54.095 23:49:00 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.095 --rc genhtml_branch_coverage=1 00:04:54.095 --rc genhtml_function_coverage=1 00:04:54.095 --rc genhtml_legend=1 00:04:54.095 --rc geninfo_all_blocks=1 00:04:54.095 --rc geninfo_unexecuted_blocks=1 00:04:54.095 00:04:54.095 ' 00:04:54.095 23:49:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:54.095 23:49:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:54.095 23:49:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:54.095 23:49:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.095 23:49:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.095 23:49:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.095 ************************************ 00:04:54.095 START TEST skip_rpc 00:04:54.095 ************************************ 00:04:54.095 23:49:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:54.095 23:49:00 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57683 00:04:54.095 23:49:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.095 23:49:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:54.095 23:49:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:54.095 [2024-11-18 23:49:00.668657] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:04:54.095 [2024-11-18 23:49:00.669067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57683 ] 00:04:54.386 [2024-11-18 23:49:00.847357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.386 [2024-11-18 23:49:00.931403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.659 [2024-11-18 23:49:01.119496] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:58.857 23:49:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:58.857 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:58.857 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:58.857 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:58.857 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.857 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:58.857 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.857 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:58.857 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.857 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.116 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:59.116 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:59.116 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:59.116 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:59.116 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:59.117 23:49:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:59.117 23:49:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57683 00:04:59.117 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57683 ']' 00:04:59.117 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57683 00:04:59.117 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:59.117 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.117 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57683 00:04:59.117 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.117 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.117 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 57683' 00:04:59.117 killing process with pid 57683 00:04:59.117 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57683 00:04:59.117 23:49:05 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57683 00:05:01.024 00:05:01.024 real 0m6.775s 00:05:01.024 user 0m6.379s 00:05:01.024 sys 0m0.300s 00:05:01.024 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.024 23:49:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.024 ************************************ 00:05:01.024 END TEST skip_rpc 00:05:01.024 ************************************ 00:05:01.024 23:49:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:01.024 23:49:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.024 23:49:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.024 23:49:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.024 ************************************ 00:05:01.024 START TEST skip_rpc_with_json 00:05:01.024 ************************************ 00:05:01.024 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:01.024 23:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:01.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.024 23:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57781 00:05:01.024 23:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.024 23:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.024 23:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57781 00:05:01.024 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57781 ']' 00:05:01.024 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.024 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.024 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.024 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.024 23:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:01.024 [2024-11-18 23:49:07.469300] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
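
The skip_rpc case that just passed asserts that a target launched with --no-rpc-server must reject RPC clients. A hand-run sketch of the same check, reusing the binary path and the harness's fixed 5-second startup wait from the trace above:

  cd /home/vagrant/spdk_repo/spdk
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  pid=$!
  sleep 5
  # no RPC listener exists, so any call must exit non-zero
  if scripts/rpc.py spdk_get_version; then echo 'unexpected: RPC answered'; else echo 'OK: RPC refused'; fi
  kill $pid && wait $pid

This mirrors the NOT rpc_cmd spdk_get_version assertion above: the test passes precisely because the client's connection attempt errors out.
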
00:05:01.024 [2024-11-18 23:49:07.469670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57781 ] 00:05:01.024 [2024-11-18 23:49:07.631815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.283 [2024-11-18 23:49:07.722754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.283 [2024-11-18 23:49:07.904578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:01.851 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.851 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:01.851 23:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:01.851 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.851 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:01.851 [2024-11-18 23:49:08.389453] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:01.851 request: 00:05:01.851 { 00:05:01.851 "trtype": "tcp", 00:05:01.851 "method": "nvmf_get_transports", 00:05:01.851 "req_id": 1 00:05:01.851 } 00:05:01.851 Got JSON-RPC error response 00:05:01.851 response: 00:05:01.851 { 00:05:01.851 "code": -19, 00:05:01.851 "message": "No such device" 00:05:01.851 } 00:05:01.851 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:01.851 23:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:01.851 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.851 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:01.851 [2024-11-18 23:49:08.401574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:01.851 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.851 23:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:01.851 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.851 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:02.110 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.110 23:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:02.110 { 00:05:02.110 "subsystems": [ 00:05:02.110 { 00:05:02.110 "subsystem": "fsdev", 00:05:02.110 "config": [ 00:05:02.110 { 00:05:02.110 "method": "fsdev_set_opts", 00:05:02.110 "params": { 00:05:02.110 "fsdev_io_pool_size": 65535, 00:05:02.110 "fsdev_io_cache_size": 256 00:05:02.110 } 00:05:02.110 } 00:05:02.110 ] 00:05:02.110 }, 00:05:02.110 { 00:05:02.110 "subsystem": "vfio_user_target", 00:05:02.110 "config": null 00:05:02.110 }, 00:05:02.110 { 00:05:02.110 "subsystem": "keyring", 00:05:02.110 "config": [] 00:05:02.110 }, 00:05:02.110 { 00:05:02.110 "subsystem": "iobuf", 00:05:02.110 "config": [ 00:05:02.110 { 00:05:02.111 "method": "iobuf_set_options", 00:05:02.111 "params": { 00:05:02.111 "small_pool_count": 8192, 00:05:02.111 "large_pool_count": 1024, 00:05:02.111 
"small_bufsize": 8192, 00:05:02.111 "large_bufsize": 135168, 00:05:02.111 "enable_numa": false 00:05:02.111 } 00:05:02.111 } 00:05:02.111 ] 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "subsystem": "sock", 00:05:02.111 "config": [ 00:05:02.111 { 00:05:02.111 "method": "sock_set_default_impl", 00:05:02.111 "params": { 00:05:02.111 "impl_name": "uring" 00:05:02.111 } 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "method": "sock_impl_set_options", 00:05:02.111 "params": { 00:05:02.111 "impl_name": "ssl", 00:05:02.111 "recv_buf_size": 4096, 00:05:02.111 "send_buf_size": 4096, 00:05:02.111 "enable_recv_pipe": true, 00:05:02.111 "enable_quickack": false, 00:05:02.111 "enable_placement_id": 0, 00:05:02.111 "enable_zerocopy_send_server": true, 00:05:02.111 "enable_zerocopy_send_client": false, 00:05:02.111 "zerocopy_threshold": 0, 00:05:02.111 "tls_version": 0, 00:05:02.111 "enable_ktls": false 00:05:02.111 } 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "method": "sock_impl_set_options", 00:05:02.111 "params": { 00:05:02.111 "impl_name": "posix", 00:05:02.111 "recv_buf_size": 2097152, 00:05:02.111 "send_buf_size": 2097152, 00:05:02.111 "enable_recv_pipe": true, 00:05:02.111 "enable_quickack": false, 00:05:02.111 "enable_placement_id": 0, 00:05:02.111 "enable_zerocopy_send_server": true, 00:05:02.111 "enable_zerocopy_send_client": false, 00:05:02.111 "zerocopy_threshold": 0, 00:05:02.111 "tls_version": 0, 00:05:02.111 "enable_ktls": false 00:05:02.111 } 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "method": "sock_impl_set_options", 00:05:02.111 "params": { 00:05:02.111 "impl_name": "uring", 00:05:02.111 "recv_buf_size": 2097152, 00:05:02.111 "send_buf_size": 2097152, 00:05:02.111 "enable_recv_pipe": true, 00:05:02.111 "enable_quickack": false, 00:05:02.111 "enable_placement_id": 0, 00:05:02.111 "enable_zerocopy_send_server": false, 00:05:02.111 "enable_zerocopy_send_client": false, 00:05:02.111 "zerocopy_threshold": 0, 00:05:02.111 "tls_version": 0, 00:05:02.111 "enable_ktls": false 00:05:02.111 } 00:05:02.111 } 00:05:02.111 ] 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "subsystem": "vmd", 00:05:02.111 "config": [] 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "subsystem": "accel", 00:05:02.111 "config": [ 00:05:02.111 { 00:05:02.111 "method": "accel_set_options", 00:05:02.111 "params": { 00:05:02.111 "small_cache_size": 128, 00:05:02.111 "large_cache_size": 16, 00:05:02.111 "task_count": 2048, 00:05:02.111 "sequence_count": 2048, 00:05:02.111 "buf_count": 2048 00:05:02.111 } 00:05:02.111 } 00:05:02.111 ] 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "subsystem": "bdev", 00:05:02.111 "config": [ 00:05:02.111 { 00:05:02.111 "method": "bdev_set_options", 00:05:02.111 "params": { 00:05:02.111 "bdev_io_pool_size": 65535, 00:05:02.111 "bdev_io_cache_size": 256, 00:05:02.111 "bdev_auto_examine": true, 00:05:02.111 "iobuf_small_cache_size": 128, 00:05:02.111 "iobuf_large_cache_size": 16 00:05:02.111 } 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "method": "bdev_raid_set_options", 00:05:02.111 "params": { 00:05:02.111 "process_window_size_kb": 1024, 00:05:02.111 "process_max_bandwidth_mb_sec": 0 00:05:02.111 } 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "method": "bdev_iscsi_set_options", 00:05:02.111 "params": { 00:05:02.111 "timeout_sec": 30 00:05:02.111 } 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "method": "bdev_nvme_set_options", 00:05:02.111 "params": { 00:05:02.111 "action_on_timeout": "none", 00:05:02.111 "timeout_us": 0, 00:05:02.111 "timeout_admin_us": 0, 00:05:02.111 "keep_alive_timeout_ms": 10000, 
00:05:02.111 "arbitration_burst": 0, 00:05:02.111 "low_priority_weight": 0, 00:05:02.111 "medium_priority_weight": 0, 00:05:02.111 "high_priority_weight": 0, 00:05:02.111 "nvme_adminq_poll_period_us": 10000, 00:05:02.111 "nvme_ioq_poll_period_us": 0, 00:05:02.111 "io_queue_requests": 0, 00:05:02.111 "delay_cmd_submit": true, 00:05:02.111 "transport_retry_count": 4, 00:05:02.111 "bdev_retry_count": 3, 00:05:02.111 "transport_ack_timeout": 0, 00:05:02.111 "ctrlr_loss_timeout_sec": 0, 00:05:02.111 "reconnect_delay_sec": 0, 00:05:02.111 "fast_io_fail_timeout_sec": 0, 00:05:02.111 "disable_auto_failback": false, 00:05:02.111 "generate_uuids": false, 00:05:02.111 "transport_tos": 0, 00:05:02.111 "nvme_error_stat": false, 00:05:02.111 "rdma_srq_size": 0, 00:05:02.111 "io_path_stat": false, 00:05:02.111 "allow_accel_sequence": false, 00:05:02.111 "rdma_max_cq_size": 0, 00:05:02.111 "rdma_cm_event_timeout_ms": 0, 00:05:02.111 "dhchap_digests": [ 00:05:02.111 "sha256", 00:05:02.111 "sha384", 00:05:02.111 "sha512" 00:05:02.111 ], 00:05:02.111 "dhchap_dhgroups": [ 00:05:02.111 "null", 00:05:02.111 "ffdhe2048", 00:05:02.111 "ffdhe3072", 00:05:02.111 "ffdhe4096", 00:05:02.111 "ffdhe6144", 00:05:02.111 "ffdhe8192" 00:05:02.111 ] 00:05:02.111 } 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "method": "bdev_nvme_set_hotplug", 00:05:02.111 "params": { 00:05:02.111 "period_us": 100000, 00:05:02.111 "enable": false 00:05:02.111 } 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "method": "bdev_wait_for_examine" 00:05:02.111 } 00:05:02.111 ] 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "subsystem": "scsi", 00:05:02.111 "config": null 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "subsystem": "scheduler", 00:05:02.111 "config": [ 00:05:02.111 { 00:05:02.111 "method": "framework_set_scheduler", 00:05:02.111 "params": { 00:05:02.111 "name": "static" 00:05:02.111 } 00:05:02.111 } 00:05:02.111 ] 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "subsystem": "vhost_scsi", 00:05:02.111 "config": [] 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "subsystem": "vhost_blk", 00:05:02.111 "config": [] 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "subsystem": "ublk", 00:05:02.111 "config": [] 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "subsystem": "nbd", 00:05:02.111 "config": [] 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "subsystem": "nvmf", 00:05:02.111 "config": [ 00:05:02.111 { 00:05:02.111 "method": "nvmf_set_config", 00:05:02.111 "params": { 00:05:02.111 "discovery_filter": "match_any", 00:05:02.111 "admin_cmd_passthru": { 00:05:02.111 "identify_ctrlr": false 00:05:02.111 }, 00:05:02.111 "dhchap_digests": [ 00:05:02.111 "sha256", 00:05:02.111 "sha384", 00:05:02.111 "sha512" 00:05:02.111 ], 00:05:02.111 "dhchap_dhgroups": [ 00:05:02.111 "null", 00:05:02.111 "ffdhe2048", 00:05:02.111 "ffdhe3072", 00:05:02.111 "ffdhe4096", 00:05:02.111 "ffdhe6144", 00:05:02.111 "ffdhe8192" 00:05:02.111 ] 00:05:02.111 } 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "method": "nvmf_set_max_subsystems", 00:05:02.111 "params": { 00:05:02.111 "max_subsystems": 1024 00:05:02.111 } 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "method": "nvmf_set_crdt", 00:05:02.111 "params": { 00:05:02.111 "crdt1": 0, 00:05:02.111 "crdt2": 0, 00:05:02.111 "crdt3": 0 00:05:02.111 } 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "method": "nvmf_create_transport", 00:05:02.111 "params": { 00:05:02.111 "trtype": "TCP", 00:05:02.111 "max_queue_depth": 128, 00:05:02.111 "max_io_qpairs_per_ctrlr": 127, 00:05:02.111 "in_capsule_data_size": 4096, 00:05:02.111 "max_io_size": 131072, 00:05:02.111 
"io_unit_size": 131072, 00:05:02.111 "max_aq_depth": 128, 00:05:02.111 "num_shared_buffers": 511, 00:05:02.111 "buf_cache_size": 4294967295, 00:05:02.111 "dif_insert_or_strip": false, 00:05:02.111 "zcopy": false, 00:05:02.111 "c2h_success": true, 00:05:02.111 "sock_priority": 0, 00:05:02.111 "abort_timeout_sec": 1, 00:05:02.111 "ack_timeout": 0, 00:05:02.111 "data_wr_pool_size": 0 00:05:02.111 } 00:05:02.111 } 00:05:02.111 ] 00:05:02.111 }, 00:05:02.111 { 00:05:02.111 "subsystem": "iscsi", 00:05:02.111 "config": [ 00:05:02.111 { 00:05:02.111 "method": "iscsi_set_options", 00:05:02.111 "params": { 00:05:02.111 "node_base": "iqn.2016-06.io.spdk", 00:05:02.111 "max_sessions": 128, 00:05:02.111 "max_connections_per_session": 2, 00:05:02.111 "max_queue_depth": 64, 00:05:02.111 "default_time2wait": 2, 00:05:02.111 "default_time2retain": 20, 00:05:02.111 "first_burst_length": 8192, 00:05:02.111 "immediate_data": true, 00:05:02.111 "allow_duplicated_isid": false, 00:05:02.111 "error_recovery_level": 0, 00:05:02.111 "nop_timeout": 60, 00:05:02.111 "nop_in_interval": 30, 00:05:02.111 "disable_chap": false, 00:05:02.111 "require_chap": false, 00:05:02.112 "mutual_chap": false, 00:05:02.112 "chap_group": 0, 00:05:02.112 "max_large_datain_per_connection": 64, 00:05:02.112 "max_r2t_per_connection": 4, 00:05:02.112 "pdu_pool_size": 36864, 00:05:02.112 "immediate_data_pool_size": 16384, 00:05:02.112 "data_out_pool_size": 2048 00:05:02.112 } 00:05:02.112 } 00:05:02.112 ] 00:05:02.112 } 00:05:02.112 ] 00:05:02.112 } 00:05:02.112 23:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:02.112 23:49:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57781 00:05:02.112 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57781 ']' 00:05:02.112 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57781 00:05:02.112 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:02.112 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.112 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57781 00:05:02.112 killing process with pid 57781 00:05:02.112 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.112 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.112 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57781' 00:05:02.112 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57781 00:05:02.112 23:49:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57781 00:05:04.015 23:49:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57826 00:05:04.015 23:49:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:04.015 23:49:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:09.286 23:49:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57826 00:05:09.286 23:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57826 ']' 00:05:09.286 23:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57826 00:05:09.286 
23:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:09.286 23:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.286 23:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57826 00:05:09.286 killing process with pid 57826 00:05:09.286 23:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.286 23:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.286 23:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57826' 00:05:09.286 23:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57826 00:05:09.286 23:49:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57826 00:05:10.666 23:49:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:10.666 23:49:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:10.666 00:05:10.666 real 0m9.828s 00:05:10.666 user 0m9.497s 00:05:10.666 sys 0m0.704s 00:05:10.666 23:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.666 ************************************ 00:05:10.666 END TEST skip_rpc_with_json 00:05:10.666 ************************************ 00:05:10.666 23:49:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.666 23:49:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:10.666 23:49:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.666 23:49:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.666 23:49:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.666 ************************************ 00:05:10.666 START TEST skip_rpc_with_delay 00:05:10.666 ************************************ 00:05:10.666 23:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:10.666 23:49:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:10.666 23:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:10.666 23:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:10.666 23:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:10.666 23:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.666 23:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:10.666 23:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.666 23:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:10.666 23:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.666 23:49:17 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:10.666 23:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:10.666 23:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:10.925 [2024-11-18 23:49:17.384617] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:10.925 23:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:10.925 23:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:10.925 23:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:10.925 23:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:10.925 00:05:10.925 real 0m0.207s 00:05:10.925 user 0m0.112s 00:05:10.925 sys 0m0.093s 00:05:10.925 ************************************ 00:05:10.925 END TEST skip_rpc_with_delay 00:05:10.925 ************************************ 00:05:10.925 23:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.925 23:49:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:10.925 23:49:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:10.925 23:49:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:10.925 23:49:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:10.925 23:49:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.925 23:49:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.925 23:49:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.925 ************************************ 00:05:10.925 START TEST exit_on_failed_rpc_init 00:05:10.925 ************************************ 00:05:10.925 23:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:10.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.925 23:49:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57954 00:05:10.925 23:49:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57954 00:05:10.925 23:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57954 ']' 00:05:10.925 23:49:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.925 23:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.925 23:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.925 23:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.926 23:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.926 23:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:11.185 [2024-11-18 23:49:17.641440] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
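Note: the skip_rpc_with_delay case above is a pure argument-validation check: spdk_tgt must refuse --wait-for-rpc when --no-rpc-server is set and exit non-zero, which the NOT wrapper turns into a pass. A minimal sketch of that assertion (not the exact test code):

  # the target must reject this flag combination and exit non-zero
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: --wait-for-rpc accepted without an RPC server" >&2
      exit 1
  fi
  # expected on stderr: "Cannot use '--wait-for-rpc' if no RPC server is going to be started."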
00:05:11.185 [2024-11-18 23:49:17.641627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57954 ] 00:05:11.185 [2024-11-18 23:49:17.823740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.445 [2024-11-18 23:49:17.913964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.445 [2024-11-18 23:49:18.102079] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:12.013 23:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.014 23:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:12.014 23:49:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.014 23:49:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:12.014 23:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:12.014 23:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:12.014 23:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.014 23:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.014 23:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.014 23:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.014 23:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.014 23:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.014 23:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.014 23:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:12.014 23:49:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:12.014 [2024-11-18 23:49:18.695512] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:12.014 [2024-11-18 23:49:18.695947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57972 ] 00:05:12.273 [2024-11-18 23:49:18.876050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.532 [2024-11-18 23:49:19.003289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.532 [2024-11-18 23:49:19.003453] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
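Note: the rpc.c error above is the expected outcome of exit_on_failed_rpc_init: a second spdk_tgt instance, even with a different core mask, defaults to the same /var/tmp/spdk.sock, so its RPC listener fails to bind and the app must stop with a non-zero exit code. A sketch of the collision being exercised (readiness wait elided):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  first=$!
  # ... wait until /var/tmp/spdk.sock is listening ...
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2; then
      echo "unexpected: second target started on a busy RPC socket" >&2
  fi
  kill "$first"; wait "$first"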
00:05:12.532 [2024-11-18 23:49:19.003492] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:12.532 [2024-11-18 23:49:19.003514] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:12.791 23:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:12.791 23:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:12.791 23:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:12.791 23:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:12.791 23:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:12.791 23:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:12.791 23:49:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:12.791 23:49:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57954 00:05:12.791 23:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57954 ']' 00:05:12.791 23:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57954 00:05:12.791 23:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:12.791 23:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.791 23:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57954 00:05:12.791 killing process with pid 57954 00:05:12.791 23:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.791 23:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.791 23:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57954' 00:05:12.791 23:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57954 00:05:12.791 23:49:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57954 00:05:14.696 00:05:14.696 real 0m3.550s 00:05:14.696 user 0m4.046s 00:05:14.696 sys 0m0.533s 00:05:14.696 23:49:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.696 ************************************ 00:05:14.696 END TEST exit_on_failed_rpc_init 00:05:14.696 ************************************ 00:05:14.696 23:49:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.696 23:49:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:14.696 00:05:14.696 real 0m20.769s 00:05:14.696 user 0m20.220s 00:05:14.696 sys 0m1.838s 00:05:14.696 23:49:21 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.696 23:49:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.696 ************************************ 00:05:14.696 END TEST skip_rpc 00:05:14.696 ************************************ 00:05:14.696 23:49:21 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:14.696 23:49:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.696 23:49:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.696 23:49:21 -- common/autotest_common.sh@10 -- # set +x 00:05:14.696 
************************************ 00:05:14.696 START TEST rpc_client 00:05:14.696 ************************************ 00:05:14.696 23:49:21 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:14.696 * Looking for test storage... 00:05:14.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:14.696 23:49:21 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.697 23:49:21 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.697 23:49:21 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:14.697 23:49:21 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.697 23:49:21 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:14.697 23:49:21 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.697 23:49:21 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:14.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.697 --rc genhtml_branch_coverage=1 00:05:14.697 --rc genhtml_function_coverage=1 00:05:14.697 --rc genhtml_legend=1 00:05:14.697 --rc geninfo_all_blocks=1 00:05:14.697 --rc geninfo_unexecuted_blocks=1 00:05:14.697 00:05:14.697 ' 00:05:14.697 23:49:21 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:14.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.697 --rc genhtml_branch_coverage=1 00:05:14.697 --rc genhtml_function_coverage=1 00:05:14.697 --rc genhtml_legend=1 00:05:14.697 --rc geninfo_all_blocks=1 00:05:14.697 --rc geninfo_unexecuted_blocks=1 00:05:14.697 00:05:14.697 ' 00:05:14.697 23:49:21 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:14.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.697 --rc genhtml_branch_coverage=1 00:05:14.697 --rc genhtml_function_coverage=1 00:05:14.697 --rc genhtml_legend=1 00:05:14.697 --rc geninfo_all_blocks=1 00:05:14.697 --rc geninfo_unexecuted_blocks=1 00:05:14.697 00:05:14.697 ' 00:05:14.697 23:49:21 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:14.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.697 --rc genhtml_branch_coverage=1 00:05:14.697 --rc genhtml_function_coverage=1 00:05:14.697 --rc genhtml_legend=1 00:05:14.697 --rc geninfo_all_blocks=1 00:05:14.697 --rc geninfo_unexecuted_blocks=1 00:05:14.697 00:05:14.697 ' 00:05:14.697 23:49:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:14.697 OK 00:05:14.957 23:49:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:14.957 00:05:14.957 real 0m0.242s 00:05:14.957 user 0m0.147s 00:05:14.957 sys 0m0.109s 00:05:14.957 23:49:21 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.957 23:49:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:14.957 ************************************ 00:05:14.957 END TEST rpc_client 00:05:14.957 ************************************ 00:05:14.957 23:49:21 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:14.957 23:49:21 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.957 23:49:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.957 23:49:21 -- common/autotest_common.sh@10 -- # set +x 00:05:14.957 ************************************ 00:05:14.957 START TEST json_config 00:05:14.957 ************************************ 00:05:14.957 23:49:21 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:14.957 23:49:21 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.957 23:49:21 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.957 23:49:21 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:14.957 23:49:21 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:14.957 23:49:21 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.957 23:49:21 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.957 23:49:21 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.957 23:49:21 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.957 23:49:21 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.957 23:49:21 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.957 23:49:21 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.957 23:49:21 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.957 23:49:21 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.957 23:49:21 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.957 23:49:21 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.957 23:49:21 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:14.957 23:49:21 json_config -- scripts/common.sh@345 -- # : 1 00:05:14.957 23:49:21 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.957 23:49:21 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.957 23:49:21 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:14.957 23:49:21 json_config -- scripts/common.sh@353 -- # local d=1 00:05:14.957 23:49:21 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.957 23:49:21 json_config -- scripts/common.sh@355 -- # echo 1 00:05:14.957 23:49:21 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.957 23:49:21 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:14.957 23:49:21 json_config -- scripts/common.sh@353 -- # local d=2 00:05:14.957 23:49:21 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.957 23:49:21 json_config -- scripts/common.sh@355 -- # echo 2 00:05:14.957 23:49:21 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.957 23:49:21 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.957 23:49:21 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.957 23:49:21 json_config -- scripts/common.sh@368 -- # return 0 00:05:14.957 23:49:21 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.957 23:49:21 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:14.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.957 --rc genhtml_branch_coverage=1 00:05:14.957 --rc genhtml_function_coverage=1 00:05:14.957 --rc genhtml_legend=1 00:05:14.957 --rc geninfo_all_blocks=1 00:05:14.957 --rc geninfo_unexecuted_blocks=1 00:05:14.957 00:05:14.957 ' 00:05:14.957 23:49:21 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:14.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.957 --rc genhtml_branch_coverage=1 00:05:14.957 --rc genhtml_function_coverage=1 00:05:14.957 --rc genhtml_legend=1 00:05:14.957 --rc geninfo_all_blocks=1 00:05:14.957 --rc geninfo_unexecuted_blocks=1 00:05:14.957 00:05:14.957 ' 00:05:14.957 23:49:21 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:14.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.957 --rc genhtml_branch_coverage=1 00:05:14.957 --rc genhtml_function_coverage=1 00:05:14.957 --rc genhtml_legend=1 00:05:14.957 --rc geninfo_all_blocks=1 00:05:14.957 --rc geninfo_unexecuted_blocks=1 00:05:14.957 00:05:14.957 ' 00:05:14.957 23:49:21 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:14.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.957 --rc genhtml_branch_coverage=1 00:05:14.957 --rc genhtml_function_coverage=1 00:05:14.957 --rc genhtml_legend=1 00:05:14.957 --rc geninfo_all_blocks=1 00:05:14.957 --rc geninfo_unexecuted_blocks=1 00:05:14.957 00:05:14.957 ' 00:05:14.957 23:49:21 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:14.957 23:49:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:14.957 23:49:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.957 23:49:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.957 23:49:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.957 23:49:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.957 23:49:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.957 23:49:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.957 23:49:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.957 23:49:21 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.957 23:49:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.957 23:49:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.957 23:49:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:05:14.957 23:49:21 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:05:14.957 23:49:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.957 23:49:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.957 23:49:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:14.957 23:49:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:14.957 23:49:21 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:14.957 23:49:21 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:14.957 23:49:21 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.957 23:49:21 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.957 23:49:21 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.957 23:49:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.957 23:49:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.958 23:49:21 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.958 23:49:21 json_config -- paths/export.sh@5 -- # export PATH 00:05:14.958 23:49:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.958 23:49:21 json_config -- nvmf/common.sh@51 -- # : 0 00:05:14.958 23:49:21 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:14.958 23:49:21 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:14.958 23:49:21 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:14.958 23:49:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.958 23:49:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.958 23:49:21 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:14.958 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:14.958 23:49:21 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:14.958 23:49:21 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:14.958 23:49:21 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:14.958 INFO: JSON configuration test init 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:14.958 23:49:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.958 23:49:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:14.958 23:49:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.958 23:49:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.958 23:49:21 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:14.958 23:49:21 json_config -- json_config/common.sh@9 -- # local app=target 00:05:14.958 23:49:21 json_config -- json_config/common.sh@10 -- # shift 
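Note: the "line 33: [: : integer expression expected" message above is bash complaining about an arithmetic test on an empty string (the traced command is '[' '' -eq 1 ']'); the test simply evaluates false and the script carries on. Illustrative sketch only, with a made-up variable name:

  [ "$flag" -eq 1 ]        # errors when $flag expands to the empty string
  [ "${flag:-0}" -eq 1 ]   # defensive form: default the empty value to 0 first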
00:05:14.958 23:49:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.958 23:49:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.958 23:49:21 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.958 23:49:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.958 23:49:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.958 23:49:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58131 00:05:14.958 Waiting for target to run... 00:05:14.958 23:49:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.958 23:49:21 json_config -- json_config/common.sh@25 -- # waitforlisten 58131 /var/tmp/spdk_tgt.sock 00:05:14.958 23:49:21 json_config -- common/autotest_common.sh@835 -- # '[' -z 58131 ']' 00:05:14.958 23:49:21 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:14.958 23:49:21 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.958 23:49:21 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.958 23:49:21 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:14.958 23:49:21 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.958 23:49:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.217 [2024-11-18 23:49:21.769376] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:15.217 [2024-11-18 23:49:21.769544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58131 ] 00:05:15.477 [2024-11-18 23:49:22.125288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.736 [2024-11-18 23:49:22.206298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.305 23:49:22 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.305 00:05:16.305 23:49:22 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:16.305 23:49:22 json_config -- json_config/common.sh@26 -- # echo '' 00:05:16.305 23:49:22 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:16.305 23:49:22 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:16.305 23:49:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.305 23:49:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.305 23:49:22 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:16.305 23:49:22 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:16.305 23:49:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.305 23:49:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.305 23:49:22 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:16.305 23:49:22 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:16.305 23:49:22 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:16.564 [2024-11-18 23:49:23.170492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:17.131 23:49:23 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:17.131 23:49:23 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:17.131 23:49:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.131 23:49:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.131 23:49:23 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:17.131 23:49:23 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:17.131 23:49:23 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:17.131 23:49:23 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:17.131 23:49:23 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:17.131 23:49:23 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:17.131 23:49:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:17.131 23:49:23 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:17.390 23:49:23 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:17.390 23:49:23 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:17.390 23:49:23 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:17.390 23:49:23 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:17.390 23:49:23 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:17.390 23:49:23 json_config -- json_config/json_config.sh@54 -- # sort 00:05:17.390 23:49:23 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:17.390 23:49:23 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:17.390 23:49:23 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:17.390 23:49:23 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:17.390 23:49:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:17.390 23:49:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.390 23:49:24 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:17.390 23:49:24 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:17.390 23:49:24 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:17.390 23:49:24 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:17.390 23:49:24 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:17.390 23:49:24 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:17.390 23:49:24 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:17.390 23:49:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.390 23:49:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.390 23:49:24 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:17.390 23:49:24 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:17.390 23:49:24 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:17.390 23:49:24 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:17.390 23:49:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:17.649 MallocForNvmf0 00:05:17.649 23:49:24 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:17.649 23:49:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:18.216 MallocForNvmf1 00:05:18.216 23:49:24 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:18.216 23:49:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:18.216 [2024-11-18 23:49:24.870697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:18.216 23:49:24 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:18.216 23:49:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:18.475 23:49:25 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:18.475 23:49:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:18.734 23:49:25 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:18.734 23:49:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:18.993 23:49:25 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:18.993 23:49:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:19.252 [2024-11-18 23:49:25.899736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:19.252 23:49:25 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:19.253 23:49:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.253 23:49:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.511 23:49:25 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:19.511 23:49:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.511 23:49:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.511 23:49:26 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:05:19.511 23:49:26 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:19.511 23:49:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:19.771 MallocBdevForConfigChangeCheck 00:05:19.771 23:49:26 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:19.771 23:49:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.771 23:49:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.771 23:49:26 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:19.771 23:49:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.029 INFO: shutting down applications... 00:05:20.029 23:49:26 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:20.029 23:49:26 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:20.029 23:49:26 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:20.029 23:49:26 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:20.029 23:49:26 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:20.597 Calling clear_iscsi_subsystem 00:05:20.597 Calling clear_nvmf_subsystem 00:05:20.597 Calling clear_nbd_subsystem 00:05:20.597 Calling clear_ublk_subsystem 00:05:20.597 Calling clear_vhost_blk_subsystem 00:05:20.597 Calling clear_vhost_scsi_subsystem 00:05:20.597 Calling clear_bdev_subsystem 00:05:20.597 23:49:27 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:20.597 23:49:27 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:20.597 23:49:27 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:20.597 23:49:27 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.597 23:49:27 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:20.597 23:49:27 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:20.856 23:49:27 json_config -- json_config/json_config.sh@352 -- # break 00:05:20.856 23:49:27 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:20.856 23:49:27 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:20.856 23:49:27 json_config -- json_config/common.sh@31 -- # local app=target 00:05:20.856 23:49:27 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:20.856 23:49:27 json_config -- json_config/common.sh@35 -- # [[ -n 58131 ]] 00:05:20.856 23:49:27 json_config -- json_config/common.sh@38 -- # kill -SIGINT 58131 00:05:20.856 23:49:27 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:20.856 23:49:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.856 23:49:27 json_config -- json_config/common.sh@41 -- # kill -0 58131 00:05:20.856 23:49:27 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:05:21.423 23:49:27 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.423 23:49:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.423 23:49:27 json_config -- json_config/common.sh@41 -- # kill -0 58131 00:05:21.423 23:49:27 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.990 23:49:28 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.990 23:49:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.990 23:49:28 json_config -- json_config/common.sh@41 -- # kill -0 58131 00:05:21.990 23:49:28 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:21.990 23:49:28 json_config -- json_config/common.sh@43 -- # break 00:05:21.990 23:49:28 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:21.990 SPDK target shutdown done 00:05:21.990 23:49:28 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:21.990 INFO: relaunching applications... 00:05:21.990 23:49:28 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:21.990 23:49:28 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:21.990 23:49:28 json_config -- json_config/common.sh@9 -- # local app=target 00:05:21.990 23:49:28 json_config -- json_config/common.sh@10 -- # shift 00:05:21.990 23:49:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:21.990 23:49:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:21.990 23:49:28 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:21.990 23:49:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.990 23:49:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.990 23:49:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58340 00:05:21.990 Waiting for target to run... 00:05:21.990 23:49:28 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:21.990 23:49:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:21.990 23:49:28 json_config -- json_config/common.sh@25 -- # waitforlisten 58340 /var/tmp/spdk_tgt.sock 00:05:21.990 23:49:28 json_config -- common/autotest_common.sh@835 -- # '[' -z 58340 ']' 00:05:21.990 23:49:28 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:21.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:21.990 23:49:28 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.990 23:49:28 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:21.990 23:49:28 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.990 23:49:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.990 [2024-11-18 23:49:28.557437] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
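The target above is stopped with SIGINT and relaunched from the configuration saved to spdk_tgt_config.json. A minimal sketch of that save-and-relaunch cycle, assuming the SPDK checkout paths used throughout this log and that OLD_PID holds the previous target's pid (58131 in this run):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk_tgt.sock
    CFG=$SPDK_DIR/spdk_tgt_config.json

    # Dump the live configuration to JSON over the RPC socket.
    "$SPDK_DIR"/scripts/rpc.py -s "$SOCK" save_config > "$CFG"

    # Stop the old target and poll until it is gone (kill -0 probes without signalling).
    kill -SIGINT "$OLD_PID"
    while kill -0 "$OLD_PID" 2>/dev/null; do sleep 0.5; done

    # Relaunch from the saved file: -m 0x1 pins one core, -s 1024 caps hugepage memory in MB.
    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$SOCK" --json "$CFG" &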
00:05:21.990 [2024-11-18 23:49:28.557601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58340 ] 00:05:22.249 [2024-11-18 23:49:28.876511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.509 [2024-11-18 23:49:28.958621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.766 [2024-11-18 23:49:29.267044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:23.403 [2024-11-18 23:49:29.828341] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:23.403 [2024-11-18 23:49:29.860544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:23.403 00:05:23.403 INFO: Checking if target configuration is the same... 00:05:23.403 23:49:29 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.403 23:49:29 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:23.403 23:49:29 json_config -- json_config/common.sh@26 -- # echo '' 00:05:23.403 23:49:29 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:23.403 23:49:29 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:23.403 23:49:29 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:23.403 23:49:29 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:23.403 23:49:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:23.403 + '[' 2 -ne 2 ']' 00:05:23.403 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:23.403 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:23.403 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:23.403 +++ basename /dev/fd/62 00:05:23.403 ++ mktemp /tmp/62.XXX 00:05:23.403 + tmp_file_1=/tmp/62.mi7 00:05:23.403 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:23.403 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:23.403 + tmp_file_2=/tmp/spdk_tgt_config.json.OY9 00:05:23.403 + ret=0 00:05:23.403 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:23.676 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:23.934 + diff -u /tmp/62.mi7 /tmp/spdk_tgt_config.json.OY9 00:05:23.934 INFO: JSON config files are the same 00:05:23.934 + echo 'INFO: JSON config files are the same' 00:05:23.934 + rm /tmp/62.mi7 /tmp/spdk_tgt_config.json.OY9 00:05:23.934 + exit 0 00:05:23.934 INFO: changing configuration and checking if this can be detected... 00:05:23.934 23:49:30 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:23.934 23:49:30 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
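json_diff.sh above answers "did the relaunched target reproduce the saved config?" by canonicalizing both sides with config_filter.py -method sort and diffing the results. A hedged sketch of the same comparison, under the same path assumptions as the sketch above:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk_tgt.sock

    live=$(mktemp /tmp/live.XXX)
    saved=$(mktemp /tmp/saved.XXX)

    # Sort both JSON documents so key ordering alone can never produce a diff.
    "$SPDK_DIR"/scripts/rpc.py -s "$SOCK" save_config \
        | "$SPDK_DIR"/test/json_config/config_filter.py -method sort > "$live"
    "$SPDK_DIR"/test/json_config/config_filter.py -method sort \
        < "$SPDK_DIR"/spdk_tgt_config.json > "$saved"

    if diff -u "$saved" "$live"; then
        echo 'INFO: JSON config files are the same'
    fi
    rm -f "$live" "$saved"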
00:05:23.934 23:49:30 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:23.934 23:49:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:24.193 23:49:30 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:24.193 23:49:30 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:24.193 23:49:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:24.193 + '[' 2 -ne 2 ']' 00:05:24.193 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:24.193 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:24.193 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:24.193 +++ basename /dev/fd/62 00:05:24.193 ++ mktemp /tmp/62.XXX 00:05:24.193 + tmp_file_1=/tmp/62.N01 00:05:24.193 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:24.193 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:24.193 + tmp_file_2=/tmp/spdk_tgt_config.json.WT2 00:05:24.193 + ret=0 00:05:24.193 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:24.452 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:24.711 + diff -u /tmp/62.N01 /tmp/spdk_tgt_config.json.WT2 00:05:24.711 + ret=1 00:05:24.711 + echo '=== Start of file: /tmp/62.N01 ===' 00:05:24.711 + cat /tmp/62.N01 00:05:24.711 + echo '=== End of file: /tmp/62.N01 ===' 00:05:24.711 + echo '' 00:05:24.711 + echo '=== Start of file: /tmp/spdk_tgt_config.json.WT2 ===' 00:05:24.711 + cat /tmp/spdk_tgt_config.json.WT2 00:05:24.711 + echo '=== End of file: /tmp/spdk_tgt_config.json.WT2 ===' 00:05:24.711 + echo '' 00:05:24.711 + rm /tmp/62.N01 /tmp/spdk_tgt_config.json.WT2 00:05:24.711 + exit 1 00:05:24.711 INFO: configuration change detected. 00:05:24.711 23:49:31 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
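The second diff run above expects failure: deleting MallocBdevForConfigChangeCheck must show up in save_config output, or change detection is broken. The same negative test as a sketch, comparing before/after on one live target rather than file-vs-live as json_diff.sh does:

    # Sketch: verify a config change is observable through save_config.
    rpc() { "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
    sort_cfg() { "$SPDK_DIR"/test/json_config/config_filter.py -method sort; }

    before=$(mktemp); after=$(mktemp)
    rpc save_config | sort_cfg > "$before"

    # Remove the sentinel bdev the test created for exactly this purpose.
    rpc bdev_malloc_delete MallocBdevForConfigChangeCheck

    rpc save_config | sort_cfg > "$after"
    if diff -u "$before" "$after" > /dev/null; then
        echo 'ERROR: change was not detected' >&2
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$before" "$after"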
00:05:24.711 23:49:31 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:24.711 23:49:31 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:24.711 23:49:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.711 23:49:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.711 23:49:31 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:24.711 23:49:31 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:24.711 23:49:31 json_config -- json_config/json_config.sh@324 -- # [[ -n 58340 ]] 00:05:24.711 23:49:31 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:24.711 23:49:31 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:24.711 23:49:31 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.711 23:49:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.711 23:49:31 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:24.711 23:49:31 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:24.711 23:49:31 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:24.711 23:49:31 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:24.711 23:49:31 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:24.711 23:49:31 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:24.711 23:49:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.711 23:49:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.711 23:49:31 json_config -- json_config/json_config.sh@330 -- # killprocess 58340 00:05:24.711 23:49:31 json_config -- common/autotest_common.sh@954 -- # '[' -z 58340 ']' 00:05:24.711 23:49:31 json_config -- common/autotest_common.sh@958 -- # kill -0 58340 00:05:24.711 23:49:31 json_config -- common/autotest_common.sh@959 -- # uname 00:05:24.711 23:49:31 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.711 23:49:31 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58340 00:05:24.711 killing process with pid 58340 00:05:24.711 23:49:31 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.711 23:49:31 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.711 23:49:31 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58340' 00:05:24.711 23:49:31 json_config -- common/autotest_common.sh@973 -- # kill 58340 00:05:24.711 23:49:31 json_config -- common/autotest_common.sh@978 -- # wait 58340 00:05:25.649 23:49:31 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:25.649 23:49:31 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:25.649 23:49:31 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:25.649 23:49:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.649 INFO: Success 00:05:25.649 23:49:32 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:25.649 23:49:32 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:25.649 ************************************ 00:05:25.649 END TEST json_config 00:05:25.649 
************************************ 00:05:25.649 00:05:25.649 real 0m10.582s 00:05:25.649 user 0m14.427s 00:05:25.649 sys 0m1.786s 00:05:25.649 23:49:32 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.649 23:49:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.649 23:49:32 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:25.649 23:49:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.649 23:49:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.649 23:49:32 -- common/autotest_common.sh@10 -- # set +x 00:05:25.649 ************************************ 00:05:25.649 START TEST json_config_extra_key 00:05:25.649 ************************************ 00:05:25.649 23:49:32 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:25.649 23:49:32 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:25.649 23:49:32 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:25.649 23:49:32 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:25.649 23:49:32 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.649 23:49:32 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:25.649 23:49:32 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.649 23:49:32 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:25.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.649 --rc genhtml_branch_coverage=1 00:05:25.649 --rc genhtml_function_coverage=1 00:05:25.649 --rc genhtml_legend=1 00:05:25.649 --rc geninfo_all_blocks=1 00:05:25.649 --rc geninfo_unexecuted_blocks=1 00:05:25.649 00:05:25.649 ' 00:05:25.649 23:49:32 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:25.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.650 --rc genhtml_branch_coverage=1 00:05:25.650 --rc genhtml_function_coverage=1 00:05:25.650 --rc genhtml_legend=1 00:05:25.650 --rc geninfo_all_blocks=1 00:05:25.650 --rc geninfo_unexecuted_blocks=1 00:05:25.650 00:05:25.650 ' 00:05:25.650 23:49:32 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:25.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.650 --rc genhtml_branch_coverage=1 00:05:25.650 --rc genhtml_function_coverage=1 00:05:25.650 --rc genhtml_legend=1 00:05:25.650 --rc geninfo_all_blocks=1 00:05:25.650 --rc geninfo_unexecuted_blocks=1 00:05:25.650 00:05:25.650 ' 00:05:25.650 23:49:32 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:25.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.650 --rc genhtml_branch_coverage=1 00:05:25.650 --rc genhtml_function_coverage=1 00:05:25.650 --rc genhtml_legend=1 00:05:25.650 --rc geninfo_all_blocks=1 00:05:25.650 --rc geninfo_unexecuted_blocks=1 00:05:25.650 00:05:25.650 ' 00:05:25.650 23:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:25.650 23:49:32 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:25.650 23:49:32 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:25.650 23:49:32 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:25.650 23:49:32 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:25.650 23:49:32 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:25.650 23:49:32 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.650 23:49:32 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.650 23:49:32 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.650 23:49:32 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:25.650 23:49:32 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:25.650 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:25.650 23:49:32 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:25.650 23:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:25.650 23:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:25.650 23:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:25.650 23:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:25.650 23:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:25.650 23:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:25.650 23:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:25.650 23:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:25.650 23:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:25.650 23:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:25.650 23:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:25.650 INFO: launching applications... 00:05:25.650 23:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:25.650 23:49:32 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:25.650 Waiting for target to run... 00:05:25.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
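Unlike the json_config run earlier, which built its state through live RPCs, json_config_extra_key feeds the entire configuration to the target at startup. A minimal launch sketch under the same path assumptions (the pid, 58506 here, is whatever the shell assigns):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk_tgt.sock

    # Start the target from a canned JSON config and remember its pid.
    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$SOCK" \
        --json "$SPDK_DIR"/test/json_config/extra_key.json &
    tgt_pid=$!
    echo "Waiting for target to run... (pid $tgt_pid)"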
00:05:25.650 23:49:32 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:25.650 23:49:32 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:25.650 23:49:32 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:25.650 23:49:32 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:25.650 23:49:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.650 23:49:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.650 23:49:32 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58506 00:05:25.650 23:49:32 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:25.650 23:49:32 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58506 /var/tmp/spdk_tgt.sock 00:05:25.650 23:49:32 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58506 ']' 00:05:25.650 23:49:32 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:25.650 23:49:32 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:25.650 23:49:32 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.650 23:49:32 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:25.650 23:49:32 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.650 23:49:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:25.909 [2024-11-18 23:49:32.407369] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:25.909 [2024-11-18 23:49:32.407562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58506 ] 00:05:26.168 [2024-11-18 23:49:32.755546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.168 [2024-11-18 23:49:32.836727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.427 [2024-11-18 23:49:33.028794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:26.996 00:05:26.996 INFO: shutting down applications... 00:05:26.996 23:49:33 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.996 23:49:33 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:26.996 23:49:33 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:26.996 23:49:33 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
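waitforlisten above polls until the target's RPC socket actually answers before any test command is sent. A hedged approximation of that helper, using spdk_get_version (a real method, visible in the rpc_get_methods dump later in this log) as the probe:

    # Sketch: block until the RPC socket answers, or give up after ~10s.
    wait_for_rpc() {
        local sock=$1 retries=100
        while (( retries-- > 0 )); do
            # spdk_get_version succeeds only once the app is listening.
            if "$SPDK_DIR"/scripts/rpc.py -s "$sock" spdk_get_version &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        echo "target never listened on $sock" >&2
        return 1
    }

    wait_for_rpc /var/tmp/spdk_tgt.sock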
00:05:26.996 23:49:33 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:26.996 23:49:33 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:26.996 23:49:33 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:26.996 23:49:33 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58506 ]] 00:05:26.996 23:49:33 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58506 00:05:26.996 23:49:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:26.996 23:49:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.996 23:49:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58506 00:05:26.996 23:49:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.255 23:49:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.255 23:49:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.255 23:49:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58506 00:05:27.255 23:49:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.823 23:49:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.824 23:49:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.824 23:49:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58506 00:05:27.824 23:49:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.392 23:49:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.392 23:49:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.392 23:49:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58506 00:05:28.392 23:49:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.960 23:49:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.961 23:49:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.961 23:49:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58506 00:05:28.961 SPDK target shutdown done 00:05:28.961 Success 00:05:28.961 23:49:35 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:28.961 23:49:35 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:28.961 23:49:35 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:28.961 23:49:35 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:28.961 23:49:35 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:28.961 00:05:28.961 real 0m3.376s 00:05:28.961 user 0m3.343s 00:05:28.961 sys 0m0.495s 00:05:28.961 ************************************ 00:05:28.961 END TEST json_config_extra_key 00:05:28.961 ************************************ 00:05:28.961 23:49:35 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.961 23:49:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:28.961 23:49:35 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:28.961 23:49:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.961 23:49:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.961 23:49:35 -- common/autotest_common.sh@10 -- # set +x 00:05:28.961 
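The shutdown above follows a fixed pattern: send SIGINT, then probe with kill -0 every 0.5 s for at most 30 iterations. The same pattern as a standalone sketch of json_config/common.sh's loop:

    # Sketch: ask the target to exit and wait up to ~15s for it to comply.
    shutdown_tgt() {
        local pid=$1 i
        kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone
        for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
            sleep 0.5
        done
        echo "pid $pid did not exit within the timeout" >&2
        return 1
    }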
************************************ 00:05:28.961 START TEST alias_rpc 00:05:28.961 ************************************ 00:05:28.961 23:49:35 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:28.961 * Looking for test storage... 00:05:28.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:28.961 23:49:35 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:28.961 23:49:35 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:28.961 23:49:35 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.220 23:49:35 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.220 23:49:35 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.220 23:49:35 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.220 23:49:35 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.220 23:49:35 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.220 23:49:35 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.220 23:49:35 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.220 23:49:35 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.220 23:49:35 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.220 23:49:35 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.221 23:49:35 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.221 23:49:35 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.221 23:49:35 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:29.221 23:49:35 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:29.221 23:49:35 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.221 23:49:35 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.221 23:49:35 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:29.221 23:49:35 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:29.221 23:49:35 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.221 23:49:35 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:29.221 23:49:35 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.221 23:49:35 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:29.221 23:49:35 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:29.221 23:49:35 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.221 23:49:35 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:29.221 23:49:35 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.221 23:49:35 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.221 23:49:35 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
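The lt 1.15 2 trace above is cmp_versions from scripts/common.sh deciding which lcov options apply. A condensed sketch of the same dotted-version comparison, swapping the script's field-by-field loop for GNU sort -V (an assumption that GNU coreutils is present):

    # Sketch: succeed when version $1 sorts strictly before version $2.
    version_lt() {
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }

    version_lt 1.15 2 && echo '1.15 < 2'   # matches the result in the trace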
00:05:29.221 23:49:35 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:29.221 23:49:35 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.221 23:49:35 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.221 --rc genhtml_branch_coverage=1 00:05:29.221 --rc genhtml_function_coverage=1 00:05:29.221 --rc genhtml_legend=1 00:05:29.221 --rc geninfo_all_blocks=1 00:05:29.221 --rc geninfo_unexecuted_blocks=1 00:05:29.221 00:05:29.221 ' 00:05:29.221 23:49:35 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.221 --rc genhtml_branch_coverage=1 00:05:29.221 --rc genhtml_function_coverage=1 00:05:29.221 --rc genhtml_legend=1 00:05:29.221 --rc geninfo_all_blocks=1 00:05:29.221 --rc geninfo_unexecuted_blocks=1 00:05:29.221 00:05:29.221 ' 00:05:29.221 23:49:35 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.221 --rc genhtml_branch_coverage=1 00:05:29.221 --rc genhtml_function_coverage=1 00:05:29.221 --rc genhtml_legend=1 00:05:29.221 --rc geninfo_all_blocks=1 00:05:29.221 --rc geninfo_unexecuted_blocks=1 00:05:29.221 00:05:29.221 ' 00:05:29.221 23:49:35 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.221 --rc genhtml_branch_coverage=1 00:05:29.221 --rc genhtml_function_coverage=1 00:05:29.221 --rc genhtml_legend=1 00:05:29.221 --rc geninfo_all_blocks=1 00:05:29.221 --rc geninfo_unexecuted_blocks=1 00:05:29.221 00:05:29.221 ' 00:05:29.221 23:49:35 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:29.221 23:49:35 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58599 00:05:29.221 23:49:35 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58599 00:05:29.221 23:49:35 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58599 ']' 00:05:29.221 23:49:35 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.221 23:49:35 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.221 23:49:35 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.221 23:49:35 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.221 23:49:35 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.221 23:49:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.221 [2024-11-18 23:49:35.827592] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:29.221 [2024-11-18 23:49:35.827984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58599 ] 00:05:29.480 [2024-11-18 23:49:36.003848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.480 [2024-11-18 23:49:36.104040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.739 [2024-11-18 23:49:36.293135] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.306 23:49:36 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.307 23:49:36 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.307 23:49:36 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:30.566 23:49:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58599 00:05:30.566 23:49:37 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58599 ']' 00:05:30.566 23:49:37 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58599 00:05:30.566 23:49:37 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:30.566 23:49:37 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.566 23:49:37 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58599 00:05:30.566 killing process with pid 58599 00:05:30.566 23:49:37 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.566 23:49:37 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.566 23:49:37 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58599' 00:05:30.566 23:49:37 alias_rpc -- common/autotest_common.sh@973 -- # kill 58599 00:05:30.566 23:49:37 alias_rpc -- common/autotest_common.sh@978 -- # wait 58599 00:05:32.470 ************************************ 00:05:32.470 END TEST alias_rpc 00:05:32.471 ************************************ 00:05:32.471 00:05:32.471 real 0m3.293s 00:05:32.471 user 0m3.479s 00:05:32.471 sys 0m0.482s 00:05:32.471 23:49:38 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.471 23:49:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.471 23:49:38 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:32.471 23:49:38 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:32.471 23:49:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.471 23:49:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.471 23:49:38 -- common/autotest_common.sh@10 -- # set +x 00:05:32.471 ************************************ 00:05:32.471 START TEST spdkcli_tcp 00:05:32.471 ************************************ 00:05:32.471 23:49:38 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:32.471 * Looking for test storage... 
00:05:32.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:32.471 23:49:38 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:32.471 23:49:38 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:32.471 23:49:38 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:32.471 23:49:39 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.471 23:49:39 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:32.471 23:49:39 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.471 23:49:39 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:32.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.471 --rc genhtml_branch_coverage=1 00:05:32.471 --rc genhtml_function_coverage=1 00:05:32.471 --rc genhtml_legend=1 00:05:32.471 --rc geninfo_all_blocks=1 00:05:32.471 --rc geninfo_unexecuted_blocks=1 00:05:32.471 00:05:32.471 ' 00:05:32.471 23:49:39 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:32.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.471 --rc genhtml_branch_coverage=1 00:05:32.471 --rc genhtml_function_coverage=1 00:05:32.471 --rc genhtml_legend=1 00:05:32.471 --rc geninfo_all_blocks=1 00:05:32.471 --rc geninfo_unexecuted_blocks=1 00:05:32.471 
00:05:32.471 ' 00:05:32.471 23:49:39 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:32.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.471 --rc genhtml_branch_coverage=1 00:05:32.471 --rc genhtml_function_coverage=1 00:05:32.471 --rc genhtml_legend=1 00:05:32.471 --rc geninfo_all_blocks=1 00:05:32.471 --rc geninfo_unexecuted_blocks=1 00:05:32.471 00:05:32.471 ' 00:05:32.471 23:49:39 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:32.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.471 --rc genhtml_branch_coverage=1 00:05:32.471 --rc genhtml_function_coverage=1 00:05:32.471 --rc genhtml_legend=1 00:05:32.471 --rc geninfo_all_blocks=1 00:05:32.471 --rc geninfo_unexecuted_blocks=1 00:05:32.471 00:05:32.471 ' 00:05:32.471 23:49:39 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:32.471 23:49:39 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:32.471 23:49:39 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:32.471 23:49:39 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:32.471 23:49:39 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:32.471 23:49:39 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:32.471 23:49:39 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:32.471 23:49:39 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:32.471 23:49:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:32.471 23:49:39 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58695 00:05:32.471 23:49:39 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58695 00:05:32.471 23:49:39 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:32.471 23:49:39 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58695 ']' 00:05:32.471 23:49:39 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.471 23:49:39 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.471 23:49:39 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.471 23:49:39 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.471 23:49:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:32.729 [2024-11-18 23:49:39.173336] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
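spdkcli_tcp drives rpc.py over TCP rather than the Unix socket: socat, launched just below, bridges 127.0.0.1:9998 to /var/tmp/spdk.sock. A sketch of that bridge and one call through it, assuming socat is installed:

    # Sketch: expose the target's Unix RPC socket on TCP port 9998.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # -r 100 retries the connection up to 100 times; -t 2 sets a 2s RPC timeout.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"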
00:05:32.729 [2024-11-18 23:49:39.173700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58695 ] 00:05:32.729 [2024-11-18 23:49:39.358092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.987 [2024-11-18 23:49:39.448173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.987 [2024-11-18 23:49:39.448185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.987 [2024-11-18 23:49:39.642193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:33.554 23:49:40 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.554 23:49:40 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:33.554 23:49:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:33.554 23:49:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58712 00:05:33.554 23:49:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:33.814 [ 00:05:33.814 "bdev_malloc_delete", 00:05:33.814 "bdev_malloc_create", 00:05:33.814 "bdev_null_resize", 00:05:33.814 "bdev_null_delete", 00:05:33.814 "bdev_null_create", 00:05:33.814 "bdev_nvme_cuse_unregister", 00:05:33.814 "bdev_nvme_cuse_register", 00:05:33.814 "bdev_opal_new_user", 00:05:33.814 "bdev_opal_set_lock_state", 00:05:33.814 "bdev_opal_delete", 00:05:33.814 "bdev_opal_get_info", 00:05:33.814 "bdev_opal_create", 00:05:33.814 "bdev_nvme_opal_revert", 00:05:33.814 "bdev_nvme_opal_init", 00:05:33.814 "bdev_nvme_send_cmd", 00:05:33.814 "bdev_nvme_set_keys", 00:05:33.814 "bdev_nvme_get_path_iostat", 00:05:33.814 "bdev_nvme_get_mdns_discovery_info", 00:05:33.814 "bdev_nvme_stop_mdns_discovery", 00:05:33.814 "bdev_nvme_start_mdns_discovery", 00:05:33.814 "bdev_nvme_set_multipath_policy", 00:05:33.814 "bdev_nvme_set_preferred_path", 00:05:33.814 "bdev_nvme_get_io_paths", 00:05:33.814 "bdev_nvme_remove_error_injection", 00:05:33.814 "bdev_nvme_add_error_injection", 00:05:33.814 "bdev_nvme_get_discovery_info", 00:05:33.814 "bdev_nvme_stop_discovery", 00:05:33.814 "bdev_nvme_start_discovery", 00:05:33.814 "bdev_nvme_get_controller_health_info", 00:05:33.814 "bdev_nvme_disable_controller", 00:05:33.814 "bdev_nvme_enable_controller", 00:05:33.814 "bdev_nvme_reset_controller", 00:05:33.814 "bdev_nvme_get_transport_statistics", 00:05:33.814 "bdev_nvme_apply_firmware", 00:05:33.814 "bdev_nvme_detach_controller", 00:05:33.814 "bdev_nvme_get_controllers", 00:05:33.814 "bdev_nvme_attach_controller", 00:05:33.814 "bdev_nvme_set_hotplug", 00:05:33.814 "bdev_nvme_set_options", 00:05:33.814 "bdev_passthru_delete", 00:05:33.814 "bdev_passthru_create", 00:05:33.814 "bdev_lvol_set_parent_bdev", 00:05:33.814 "bdev_lvol_set_parent", 00:05:33.814 "bdev_lvol_check_shallow_copy", 00:05:33.814 "bdev_lvol_start_shallow_copy", 00:05:33.814 "bdev_lvol_grow_lvstore", 00:05:33.814 "bdev_lvol_get_lvols", 00:05:33.814 "bdev_lvol_get_lvstores", 00:05:33.814 "bdev_lvol_delete", 00:05:33.814 "bdev_lvol_set_read_only", 00:05:33.814 "bdev_lvol_resize", 00:05:33.814 "bdev_lvol_decouple_parent", 00:05:33.814 "bdev_lvol_inflate", 00:05:33.814 "bdev_lvol_rename", 00:05:33.814 "bdev_lvol_clone_bdev", 00:05:33.814 "bdev_lvol_clone", 00:05:33.814 "bdev_lvol_snapshot", 
00:05:33.814 "bdev_lvol_create", 00:05:33.814 "bdev_lvol_delete_lvstore", 00:05:33.814 "bdev_lvol_rename_lvstore", 00:05:33.814 "bdev_lvol_create_lvstore", 00:05:33.814 "bdev_raid_set_options", 00:05:33.814 "bdev_raid_remove_base_bdev", 00:05:33.814 "bdev_raid_add_base_bdev", 00:05:33.814 "bdev_raid_delete", 00:05:33.814 "bdev_raid_create", 00:05:33.814 "bdev_raid_get_bdevs", 00:05:33.814 "bdev_error_inject_error", 00:05:33.814 "bdev_error_delete", 00:05:33.814 "bdev_error_create", 00:05:33.814 "bdev_split_delete", 00:05:33.814 "bdev_split_create", 00:05:33.814 "bdev_delay_delete", 00:05:33.814 "bdev_delay_create", 00:05:33.814 "bdev_delay_update_latency", 00:05:33.814 "bdev_zone_block_delete", 00:05:33.814 "bdev_zone_block_create", 00:05:33.814 "blobfs_create", 00:05:33.814 "blobfs_detect", 00:05:33.814 "blobfs_set_cache_size", 00:05:33.814 "bdev_aio_delete", 00:05:33.814 "bdev_aio_rescan", 00:05:33.814 "bdev_aio_create", 00:05:33.814 "bdev_ftl_set_property", 00:05:33.814 "bdev_ftl_get_properties", 00:05:33.814 "bdev_ftl_get_stats", 00:05:33.814 "bdev_ftl_unmap", 00:05:33.814 "bdev_ftl_unload", 00:05:33.814 "bdev_ftl_delete", 00:05:33.814 "bdev_ftl_load", 00:05:33.814 "bdev_ftl_create", 00:05:33.814 "bdev_virtio_attach_controller", 00:05:33.814 "bdev_virtio_scsi_get_devices", 00:05:33.814 "bdev_virtio_detach_controller", 00:05:33.814 "bdev_virtio_blk_set_hotplug", 00:05:33.814 "bdev_iscsi_delete", 00:05:33.814 "bdev_iscsi_create", 00:05:33.814 "bdev_iscsi_set_options", 00:05:33.814 "bdev_uring_delete", 00:05:33.814 "bdev_uring_rescan", 00:05:33.814 "bdev_uring_create", 00:05:33.814 "accel_error_inject_error", 00:05:33.814 "ioat_scan_accel_module", 00:05:33.814 "dsa_scan_accel_module", 00:05:33.814 "iaa_scan_accel_module", 00:05:33.814 "vfu_virtio_create_fs_endpoint", 00:05:33.814 "vfu_virtio_create_scsi_endpoint", 00:05:33.814 "vfu_virtio_scsi_remove_target", 00:05:33.814 "vfu_virtio_scsi_add_target", 00:05:33.814 "vfu_virtio_create_blk_endpoint", 00:05:33.814 "vfu_virtio_delete_endpoint", 00:05:33.814 "keyring_file_remove_key", 00:05:33.814 "keyring_file_add_key", 00:05:33.814 "keyring_linux_set_options", 00:05:33.814 "fsdev_aio_delete", 00:05:33.814 "fsdev_aio_create", 00:05:33.814 "iscsi_get_histogram", 00:05:33.814 "iscsi_enable_histogram", 00:05:33.814 "iscsi_set_options", 00:05:33.814 "iscsi_get_auth_groups", 00:05:33.814 "iscsi_auth_group_remove_secret", 00:05:33.814 "iscsi_auth_group_add_secret", 00:05:33.814 "iscsi_delete_auth_group", 00:05:33.814 "iscsi_create_auth_group", 00:05:33.814 "iscsi_set_discovery_auth", 00:05:33.814 "iscsi_get_options", 00:05:33.814 "iscsi_target_node_request_logout", 00:05:33.814 "iscsi_target_node_set_redirect", 00:05:33.814 "iscsi_target_node_set_auth", 00:05:33.814 "iscsi_target_node_add_lun", 00:05:33.814 "iscsi_get_stats", 00:05:33.814 "iscsi_get_connections", 00:05:33.814 "iscsi_portal_group_set_auth", 00:05:33.814 "iscsi_start_portal_group", 00:05:33.814 "iscsi_delete_portal_group", 00:05:33.814 "iscsi_create_portal_group", 00:05:33.814 "iscsi_get_portal_groups", 00:05:33.814 "iscsi_delete_target_node", 00:05:33.815 "iscsi_target_node_remove_pg_ig_maps", 00:05:33.815 "iscsi_target_node_add_pg_ig_maps", 00:05:33.815 "iscsi_create_target_node", 00:05:33.815 "iscsi_get_target_nodes", 00:05:33.815 "iscsi_delete_initiator_group", 00:05:33.815 "iscsi_initiator_group_remove_initiators", 00:05:33.815 "iscsi_initiator_group_add_initiators", 00:05:33.815 "iscsi_create_initiator_group", 00:05:33.815 "iscsi_get_initiator_groups", 00:05:33.815 
"nvmf_set_crdt", 00:05:33.815 "nvmf_set_config", 00:05:33.815 "nvmf_set_max_subsystems", 00:05:33.815 "nvmf_stop_mdns_prr", 00:05:33.815 "nvmf_publish_mdns_prr", 00:05:33.815 "nvmf_subsystem_get_listeners", 00:05:33.815 "nvmf_subsystem_get_qpairs", 00:05:33.815 "nvmf_subsystem_get_controllers", 00:05:33.815 "nvmf_get_stats", 00:05:33.815 "nvmf_get_transports", 00:05:33.815 "nvmf_create_transport", 00:05:33.815 "nvmf_get_targets", 00:05:33.815 "nvmf_delete_target", 00:05:33.815 "nvmf_create_target", 00:05:33.815 "nvmf_subsystem_allow_any_host", 00:05:33.815 "nvmf_subsystem_set_keys", 00:05:33.815 "nvmf_subsystem_remove_host", 00:05:33.815 "nvmf_subsystem_add_host", 00:05:33.815 "nvmf_ns_remove_host", 00:05:33.815 "nvmf_ns_add_host", 00:05:33.815 "nvmf_subsystem_remove_ns", 00:05:33.815 "nvmf_subsystem_set_ns_ana_group", 00:05:33.815 "nvmf_subsystem_add_ns", 00:05:33.815 "nvmf_subsystem_listener_set_ana_state", 00:05:33.815 "nvmf_discovery_get_referrals", 00:05:33.815 "nvmf_discovery_remove_referral", 00:05:33.815 "nvmf_discovery_add_referral", 00:05:33.815 "nvmf_subsystem_remove_listener", 00:05:33.815 "nvmf_subsystem_add_listener", 00:05:33.815 "nvmf_delete_subsystem", 00:05:33.815 "nvmf_create_subsystem", 00:05:33.815 "nvmf_get_subsystems", 00:05:33.815 "env_dpdk_get_mem_stats", 00:05:33.815 "nbd_get_disks", 00:05:33.815 "nbd_stop_disk", 00:05:33.815 "nbd_start_disk", 00:05:33.815 "ublk_recover_disk", 00:05:33.815 "ublk_get_disks", 00:05:33.815 "ublk_stop_disk", 00:05:33.815 "ublk_start_disk", 00:05:33.815 "ublk_destroy_target", 00:05:33.815 "ublk_create_target", 00:05:33.815 "virtio_blk_create_transport", 00:05:33.815 "virtio_blk_get_transports", 00:05:33.815 "vhost_controller_set_coalescing", 00:05:33.815 "vhost_get_controllers", 00:05:33.815 "vhost_delete_controller", 00:05:33.815 "vhost_create_blk_controller", 00:05:33.815 "vhost_scsi_controller_remove_target", 00:05:33.815 "vhost_scsi_controller_add_target", 00:05:33.815 "vhost_start_scsi_controller", 00:05:33.815 "vhost_create_scsi_controller", 00:05:33.815 "thread_set_cpumask", 00:05:33.815 "scheduler_set_options", 00:05:33.815 "framework_get_governor", 00:05:33.815 "framework_get_scheduler", 00:05:33.815 "framework_set_scheduler", 00:05:33.815 "framework_get_reactors", 00:05:33.815 "thread_get_io_channels", 00:05:33.815 "thread_get_pollers", 00:05:33.815 "thread_get_stats", 00:05:33.815 "framework_monitor_context_switch", 00:05:33.815 "spdk_kill_instance", 00:05:33.815 "log_enable_timestamps", 00:05:33.815 "log_get_flags", 00:05:33.815 "log_clear_flag", 00:05:33.815 "log_set_flag", 00:05:33.815 "log_get_level", 00:05:33.815 "log_set_level", 00:05:33.815 "log_get_print_level", 00:05:33.815 "log_set_print_level", 00:05:33.815 "framework_enable_cpumask_locks", 00:05:33.815 "framework_disable_cpumask_locks", 00:05:33.815 "framework_wait_init", 00:05:33.815 "framework_start_init", 00:05:33.815 "scsi_get_devices", 00:05:33.815 "bdev_get_histogram", 00:05:33.815 "bdev_enable_histogram", 00:05:33.815 "bdev_set_qos_limit", 00:05:33.815 "bdev_set_qd_sampling_period", 00:05:33.815 "bdev_get_bdevs", 00:05:33.815 "bdev_reset_iostat", 00:05:33.815 "bdev_get_iostat", 00:05:33.815 "bdev_examine", 00:05:33.815 "bdev_wait_for_examine", 00:05:33.815 "bdev_set_options", 00:05:33.815 "accel_get_stats", 00:05:33.815 "accel_set_options", 00:05:33.815 "accel_set_driver", 00:05:33.815 "accel_crypto_key_destroy", 00:05:33.815 "accel_crypto_keys_get", 00:05:33.815 "accel_crypto_key_create", 00:05:33.815 "accel_assign_opc", 00:05:33.815 
"accel_get_module_info", 00:05:33.815 "accel_get_opc_assignments", 00:05:33.815 "vmd_rescan", 00:05:33.815 "vmd_remove_device", 00:05:33.815 "vmd_enable", 00:05:33.815 "sock_get_default_impl", 00:05:33.815 "sock_set_default_impl", 00:05:33.815 "sock_impl_set_options", 00:05:33.815 "sock_impl_get_options", 00:05:33.815 "iobuf_get_stats", 00:05:33.815 "iobuf_set_options", 00:05:33.815 "keyring_get_keys", 00:05:33.815 "vfu_tgt_set_base_path", 00:05:33.815 "framework_get_pci_devices", 00:05:33.815 "framework_get_config", 00:05:33.815 "framework_get_subsystems", 00:05:33.815 "fsdev_set_opts", 00:05:33.815 "fsdev_get_opts", 00:05:33.815 "trace_get_info", 00:05:33.815 "trace_get_tpoint_group_mask", 00:05:33.815 "trace_disable_tpoint_group", 00:05:33.815 "trace_enable_tpoint_group", 00:05:33.815 "trace_clear_tpoint_mask", 00:05:33.815 "trace_set_tpoint_mask", 00:05:33.815 "notify_get_notifications", 00:05:33.815 "notify_get_types", 00:05:33.815 "spdk_get_version", 00:05:33.815 "rpc_get_methods" 00:05:33.815 ] 00:05:33.815 23:49:40 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:33.815 23:49:40 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:33.815 23:49:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.815 23:49:40 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:33.815 23:49:40 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58695 00:05:33.815 23:49:40 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58695 ']' 00:05:33.815 23:49:40 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58695 00:05:33.815 23:49:40 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:33.815 23:49:40 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.815 23:49:40 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58695 00:05:33.815 killing process with pid 58695 00:05:33.815 23:49:40 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.815 23:49:40 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.815 23:49:40 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58695' 00:05:33.815 23:49:40 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58695 00:05:33.815 23:49:40 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58695 00:05:35.720 ************************************ 00:05:35.720 END TEST spdkcli_tcp 00:05:35.720 ************************************ 00:05:35.720 00:05:35.720 real 0m3.421s 00:05:35.720 user 0m6.246s 00:05:35.720 sys 0m0.519s 00:05:35.720 23:49:42 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.720 23:49:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.720 23:49:42 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:35.720 23:49:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.720 23:49:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.720 23:49:42 -- common/autotest_common.sh@10 -- # set +x 00:05:35.720 ************************************ 00:05:35.720 START TEST dpdk_mem_utility 00:05:35.720 ************************************ 00:05:35.720 23:49:42 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:35.720 * Looking for test storage... 
00:05:35.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:35.720 23:49:42 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.720 23:49:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.720 23:49:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.980 23:49:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.980 23:49:42 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:35.980 23:49:42 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.980 23:49:42 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.980 --rc genhtml_branch_coverage=1 00:05:35.980 --rc genhtml_function_coverage=1 00:05:35.980 --rc genhtml_legend=1 00:05:35.980 --rc geninfo_all_blocks=1 00:05:35.980 --rc geninfo_unexecuted_blocks=1 00:05:35.980 00:05:35.980 ' 00:05:35.980 23:49:42 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.980 --rc 
genhtml_branch_coverage=1 00:05:35.980 --rc genhtml_function_coverage=1 00:05:35.980 --rc genhtml_legend=1 00:05:35.980 --rc geninfo_all_blocks=1 00:05:35.980 --rc geninfo_unexecuted_blocks=1 00:05:35.980 00:05:35.980 ' 00:05:35.980 23:49:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.980 --rc genhtml_branch_coverage=1 00:05:35.980 --rc genhtml_function_coverage=1 00:05:35.980 --rc genhtml_legend=1 00:05:35.980 --rc geninfo_all_blocks=1 00:05:35.980 --rc geninfo_unexecuted_blocks=1 00:05:35.980 00:05:35.980 ' 00:05:35.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.980 23:49:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.980 --rc genhtml_branch_coverage=1 00:05:35.980 --rc genhtml_function_coverage=1 00:05:35.980 --rc genhtml_legend=1 00:05:35.980 --rc geninfo_all_blocks=1 00:05:35.980 --rc geninfo_unexecuted_blocks=1 00:05:35.980 00:05:35.980 ' 00:05:35.980 23:49:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:35.980 23:49:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58817 00:05:35.980 23:49:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58817 00:05:35.980 23:49:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58817 ']' 00:05:35.980 23:49:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:35.980 23:49:42 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.980 23:49:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.980 23:49:42 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.980 23:49:42 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.980 23:49:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.980 [2024-11-18 23:49:42.646673] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:35.980 [2024-11-18 23:49:42.647096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58817 ] 00:05:36.240 [2024-11-18 23:49:42.830064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.241 [2024-11-18 23:49:42.916850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.500 [2024-11-18 23:49:43.115973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.071 23:49:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.071 23:49:43 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:37.071 23:49:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:37.071 23:49:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:37.071 23:49:43 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.071 23:49:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:37.071 { 00:05:37.071 "filename": "/tmp/spdk_mem_dump.txt" 00:05:37.071 } 00:05:37.071 23:49:43 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.071 23:49:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:37.071 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:37.071 1 heaps totaling size 816.000000 MiB 00:05:37.071 size: 816.000000 MiB heap id: 0 00:05:37.071 end heaps---------- 00:05:37.071 9 mempools totaling size 595.772034 MiB 00:05:37.071 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:37.071 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:37.071 size: 92.545471 MiB name: bdev_io_58817 00:05:37.071 size: 50.003479 MiB name: msgpool_58817 00:05:37.071 size: 36.509338 MiB name: fsdev_io_58817 00:05:37.071 size: 21.763794 MiB name: PDU_Pool 00:05:37.071 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:37.071 size: 4.133484 MiB name: evtpool_58817 00:05:37.071 size: 0.026123 MiB name: Session_Pool 00:05:37.071 end mempools------- 00:05:37.071 6 memzones totaling size 4.142822 MiB 00:05:37.071 size: 1.000366 MiB name: RG_ring_0_58817 00:05:37.071 size: 1.000366 MiB name: RG_ring_1_58817 00:05:37.071 size: 1.000366 MiB name: RG_ring_4_58817 00:05:37.071 size: 1.000366 MiB name: RG_ring_5_58817 00:05:37.071 size: 0.125366 MiB name: RG_ring_2_58817 00:05:37.071 size: 0.015991 MiB name: RG_ring_3_58817 00:05:37.071 end memzones------- 00:05:37.071 23:49:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:37.071 heap id: 0 total size: 816.000000 MiB number of busy elements: 315 number of free elements: 18 00:05:37.071 list of free elements. 
size: 16.791382 MiB 00:05:37.071 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:37.071 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:37.071 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:37.071 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:37.071 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:37.071 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:37.071 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:37.071 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:37.071 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:37.071 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:37.071 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:37.071 element at address: 0x20001ac00000 with size: 0.561707 MiB 00:05:37.071 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:37.071 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:37.071 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:37.071 element at address: 0x200012c00000 with size: 0.443481 MiB 00:05:37.071 element at address: 0x200028000000 with size: 0.390442 MiB 00:05:37.071 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:37.071 list of standard malloc elements. size: 199.287720 MiB 00:05:37.071 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:37.071 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:37.071 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:37.071 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:37.071 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:37.071 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:37.071 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:37.071 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:37.071 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:37.071 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:37.071 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:37.071 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:37.071 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:37.071 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:37.072 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:37.072 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:37.072 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:37.072 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200018e7cfc0 
with size: 0.000244 MiB 00:05:37.072 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:37.072 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:37.072 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:37.072 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 
00:05:37.073 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:37.073 element at 
address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:37.073 element at address: 0x200028063f40 with size: 0.000244 MiB 00:05:37.073 element at address: 0x200028064040 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806af80 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806b080 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806b180 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806b280 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806d880 
with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:37.073 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:37.074 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:37.074 list of memzone associated elements. 
size: 599.920898 MiB 00:05:37.074 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:37.074 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:37.074 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:37.074 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:37.074 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:37.074 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58817_0 00:05:37.074 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:37.074 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58817_0 00:05:37.074 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:37.074 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58817_0 00:05:37.074 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:37.074 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:37.074 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:37.074 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:37.074 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:37.074 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58817_0 00:05:37.074 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:37.074 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58817 00:05:37.074 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:37.074 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58817 00:05:37.074 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:37.074 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:37.074 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:37.074 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:37.074 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:37.074 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:37.074 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:37.074 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:37.074 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:37.074 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58817 00:05:37.074 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:37.074 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58817 00:05:37.074 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:37.074 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58817 00:05:37.074 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:37.074 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58817 00:05:37.074 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:37.074 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58817 00:05:37.074 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:37.074 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58817 00:05:37.074 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:37.074 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:37.074 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:37.074 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:37.074 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:37.074 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:37.074 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:37.074 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58817 00:05:37.074 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:37.074 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58817 00:05:37.074 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:37.074 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:37.074 element at address: 0x200028064140 with size: 0.023804 MiB 00:05:37.074 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:37.074 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:37.074 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58817 00:05:37.074 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:05:37.074 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:37.074 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:37.074 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58817 00:05:37.074 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:37.074 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58817 00:05:37.074 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:37.074 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58817 00:05:37.074 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:05:37.074 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:37.074 23:49:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:37.074 23:49:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58817 00:05:37.074 23:49:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58817 ']' 00:05:37.074 23:49:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58817 00:05:37.334 23:49:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:37.334 23:49:43 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.334 23:49:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58817 00:05:37.334 killing process with pid 58817 00:05:37.334 23:49:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.334 23:49:43 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.334 23:49:43 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58817' 00:05:37.334 23:49:43 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58817 00:05:37.334 23:49:43 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58817 00:05:39.244 00:05:39.244 real 0m3.201s 00:05:39.244 user 0m3.349s 00:05:39.244 sys 0m0.476s 00:05:39.244 ************************************ 00:05:39.244 END TEST dpdk_mem_utility 00:05:39.244 ************************************ 00:05:39.244 23:49:45 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.244 23:49:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.244 23:49:45 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:39.244 23:49:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.244 23:49:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.244 23:49:45 -- common/autotest_common.sh@10 -- # set +x 
00:05:39.244 ************************************ 00:05:39.244 START TEST event 00:05:39.244 ************************************ 00:05:39.244 23:49:45 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:39.244 * Looking for test storage... 00:05:39.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:39.244 23:49:45 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.244 23:49:45 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.244 23:49:45 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.244 23:49:45 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.244 23:49:45 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.244 23:49:45 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.244 23:49:45 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.244 23:49:45 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.244 23:49:45 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.244 23:49:45 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.244 23:49:45 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.244 23:49:45 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.244 23:49:45 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.244 23:49:45 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.244 23:49:45 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.244 23:49:45 event -- scripts/common.sh@344 -- # case "$op" in 00:05:39.244 23:49:45 event -- scripts/common.sh@345 -- # : 1 00:05:39.244 23:49:45 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.244 23:49:45 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.244 23:49:45 event -- scripts/common.sh@365 -- # decimal 1 00:05:39.244 23:49:45 event -- scripts/common.sh@353 -- # local d=1 00:05:39.244 23:49:45 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.244 23:49:45 event -- scripts/common.sh@355 -- # echo 1 00:05:39.244 23:49:45 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.244 23:49:45 event -- scripts/common.sh@366 -- # decimal 2 00:05:39.244 23:49:45 event -- scripts/common.sh@353 -- # local d=2 00:05:39.244 23:49:45 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.244 23:49:45 event -- scripts/common.sh@355 -- # echo 2 00:05:39.244 23:49:45 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.244 23:49:45 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.244 23:49:45 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.244 23:49:45 event -- scripts/common.sh@368 -- # return 0 00:05:39.244 23:49:45 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.244 23:49:45 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.244 --rc genhtml_branch_coverage=1 00:05:39.244 --rc genhtml_function_coverage=1 00:05:39.244 --rc genhtml_legend=1 00:05:39.244 --rc geninfo_all_blocks=1 00:05:39.244 --rc geninfo_unexecuted_blocks=1 00:05:39.244 00:05:39.244 ' 00:05:39.244 23:49:45 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.244 --rc genhtml_branch_coverage=1 00:05:39.244 --rc genhtml_function_coverage=1 00:05:39.244 --rc genhtml_legend=1 00:05:39.244 --rc 
geninfo_all_blocks=1 00:05:39.244 --rc geninfo_unexecuted_blocks=1 00:05:39.244 00:05:39.244 ' 00:05:39.244 23:49:45 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.244 --rc genhtml_branch_coverage=1 00:05:39.244 --rc genhtml_function_coverage=1 00:05:39.244 --rc genhtml_legend=1 00:05:39.244 --rc geninfo_all_blocks=1 00:05:39.244 --rc geninfo_unexecuted_blocks=1 00:05:39.244 00:05:39.244 ' 00:05:39.244 23:49:45 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.244 --rc genhtml_branch_coverage=1 00:05:39.244 --rc genhtml_function_coverage=1 00:05:39.244 --rc genhtml_legend=1 00:05:39.244 --rc geninfo_all_blocks=1 00:05:39.244 --rc geninfo_unexecuted_blocks=1 00:05:39.244 00:05:39.244 ' 00:05:39.244 23:49:45 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:39.244 23:49:45 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:39.244 23:49:45 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:39.244 23:49:45 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:39.245 23:49:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.245 23:49:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.245 ************************************ 00:05:39.245 START TEST event_perf 00:05:39.245 ************************************ 00:05:39.245 23:49:45 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:39.245 Running I/O for 1 seconds...[2024-11-18 23:49:45.795508] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:39.245 [2024-11-18 23:49:45.795816] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58914 ] 00:05:39.504 [2024-11-18 23:49:45.972265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:39.504 [2024-11-18 23:49:46.063937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.504 [2024-11-18 23:49:46.064111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.504 [2024-11-18 23:49:46.064165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.504 Running I/O for 1 seconds...[2024-11-18 23:49:46.064174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:40.884 00:05:40.884 lcore 0: 192754 00:05:40.884 lcore 1: 192754 00:05:40.884 lcore 2: 192755 00:05:40.884 lcore 3: 192753 00:05:40.884 done. 
00:05:40.884 00:05:40.884 real 0m1.504s 00:05:40.884 user 0m4.272s 00:05:40.884 sys 0m0.107s 00:05:40.884 23:49:47 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.884 23:49:47 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:40.884 ************************************ 00:05:40.884 END TEST event_perf 00:05:40.884 ************************************ 00:05:40.884 23:49:47 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:40.884 23:49:47 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:40.884 23:49:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.884 23:49:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.884 ************************************ 00:05:40.884 START TEST event_reactor 00:05:40.884 ************************************ 00:05:40.884 23:49:47 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:40.884 [2024-11-18 23:49:47.347265] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:40.884 [2024-11-18 23:49:47.347588] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58948 ] 00:05:40.884 [2024-11-18 23:49:47.530466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.143 [2024-11-18 23:49:47.617934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.523 test_start 00:05:42.523 oneshot 00:05:42.523 tick 100 00:05:42.523 tick 100 00:05:42.523 tick 250 00:05:42.523 tick 100 00:05:42.523 tick 100 00:05:42.523 tick 100 00:05:42.524 tick 250 00:05:42.524 tick 500 00:05:42.524 tick 100 00:05:42.524 tick 100 00:05:42.524 tick 250 00:05:42.524 tick 100 00:05:42.524 tick 100 00:05:42.524 test_end 00:05:42.524 00:05:42.524 real 0m1.504s 00:05:42.524 user 0m1.292s 00:05:42.524 sys 0m0.105s 00:05:42.524 ************************************ 00:05:42.524 END TEST event_reactor 00:05:42.524 ************************************ 00:05:42.524 23:49:48 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.524 23:49:48 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:42.524 23:49:48 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:42.524 23:49:48 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:42.524 23:49:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.524 23:49:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.524 ************************************ 00:05:42.524 START TEST event_reactor_perf 00:05:42.524 ************************************ 00:05:42.524 23:49:48 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:42.524 [2024-11-18 23:49:48.905170] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:05:42.524 [2024-11-18 23:49:48.905320] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58990 ] 00:05:42.524 [2024-11-18 23:49:49.078831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.524 [2024-11-18 23:49:49.158917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.899 test_start 00:05:43.899 test_end 00:05:43.899 Performance: 337993 events per second 00:05:43.899 00:05:43.899 real 0m1.481s 00:05:43.899 user 0m1.292s 00:05:43.899 sys 0m0.080s 00:05:43.899 23:49:50 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.899 23:49:50 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.899 ************************************ 00:05:43.899 END TEST event_reactor_perf 00:05:43.899 ************************************ 00:05:43.899 23:49:50 event -- event/event.sh@49 -- # uname -s 00:05:43.899 23:49:50 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:43.899 23:49:50 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:43.899 23:49:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.899 23:49:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.899 23:49:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.899 ************************************ 00:05:43.899 START TEST event_scheduler 00:05:43.899 ************************************ 00:05:43.899 23:49:50 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:43.899 * Looking for test storage... 
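reactor_perf, which finished just above with "Performance: 337993 events per second", measures single-reactor event throughput rather than per-core counts. A hedged sketch of running it standalone and capturing the figure (the awk post-processing is illustrative, not part of the harness):

    cd /home/vagrant/spdk_repo/spdk
    ./test/event/reactor_perf/reactor_perf -t 1      # prints "Performance: <N> events per second"
    # Hypothetical post-processing: pull the number out of the report line.
    events=$(./test/event/reactor_perf/reactor_perf -t 1 | awk '/^Performance:/ {print $2}')
    echo "$events"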
00:05:43.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:44.158 23:49:50 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:44.158 23:49:50 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59061 00:05:44.158 23:49:50 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.158 23:49:50 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59061 00:05:44.158 23:49:50 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:44.158 23:49:50
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59061 ']' 00:05:44.158 23:49:50 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.158 23:49:50 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.158 23:49:50 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.158 23:49:50 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.158 23:49:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.158 [2024-11-18 23:49:50.702129] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:44.158 [2024-11-18 23:49:50.702502] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59061 ] 00:05:44.417 [2024-11-18 23:49:50.888719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:44.417 [2024-11-18 23:49:51.021492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.417 [2024-11-18 23:49:51.021645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.417 [2024-11-18 23:49:51.021761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.417 [2024-11-18 23:49:51.021769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.986 23:49:51 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.986 23:49:51 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:44.986 23:49:51 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:44.986 23:49:51 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.986 23:49:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.986 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:44.986 POWER: Cannot set governor of lcore 0 to userspace 00:05:44.986 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:44.986 POWER: Cannot set governor of lcore 0 to performance 00:05:44.986 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:44.986 POWER: Cannot set governor of lcore 0 to userspace 00:05:44.986 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:44.986 POWER: Cannot set governor of lcore 0 to userspace 00:05:44.986 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:44.986 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:44.986 POWER: Unable to set Power Management Environment for lcore 0 00:05:44.986 [2024-11-18 23:49:51.644738] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:44.986 [2024-11-18 23:49:51.644760] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:44.986 [2024-11-18 23:49:51.644774] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:44.986 [2024-11-18 23:49:51.644796] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:44.986 [2024-11-18 23:49:51.644807] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:44.986 [2024-11-18 23:49:51.644820] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:44.986 23:49:51 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.986 23:49:51 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:44.986 23:49:51 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.986 23:49:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.246 [2024-11-18 23:49:51.808280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.246 [2024-11-18 23:49:51.886745] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:45.246 23:49:51 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.246 23:49:51 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:45.246 23:49:51 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.246 23:49:51 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.246 23:49:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.246 ************************************ 00:05:45.246 START TEST scheduler_create_thread 00:05:45.246 ************************************ 00:05:45.246 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:45.246 23:49:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:45.246 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.246 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.246 2 00:05:45.246 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.246 23:49:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:45.246 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.246 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.246 3 00:05:45.246 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.246 23:49:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:45.246 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.246 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.246 4 00:05:45.246 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.246 23:49:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:45.246 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.246 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.505 5 00:05:45.505 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.505 23:49:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:45.505 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.505 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.505 6 00:05:45.505 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.505 23:49:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:45.505 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.505 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.505 7 00:05:45.505 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.506 23:49:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:45.506 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.506 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.506 8 00:05:45.506 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.506 23:49:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:45.506 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.506 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.506 9 00:05:45.506 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.506 23:49:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:45.506 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.506 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.506 10 00:05:45.506 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.506 23:49:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:45.506 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.506 23:49:51 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.506 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.506 23:49:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:45.506 23:49:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:45.506 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.506 23:49:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.506 23:49:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.506 23:49:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:45.506 23:49:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.506 23:49:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.929 23:49:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.929 23:49:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:46.929 23:49:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:46.929 23:49:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.929 23:49:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.880 ************************************ 00:05:47.880 END TEST scheduler_create_thread 00:05:47.880 ************************************ 00:05:47.880 23:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.880 00:05:47.880 real 0m2.622s 00:05:47.880 user 0m0.018s 00:05:47.880 sys 0m0.002s 00:05:47.880 23:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.880 23:49:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.880 23:49:54 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:47.880 23:49:54 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59061 00:05:47.880 23:49:54 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59061 ']' 00:05:47.880 23:49:54 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59061 00:05:47.880 23:49:54 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:47.880 23:49:54 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.880 23:49:54 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59061 00:05:48.139 killing process with pid 59061 00:05:48.139 23:49:54 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:48.139 23:49:54 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:48.139 23:49:54 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
59061' 00:05:48.139 23:49:54 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59061 00:05:48.139 23:49:54 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59061 00:05:48.399 [2024-11-18 23:49:55.000835] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:49.337 00:05:49.337 real 0m5.449s 00:05:49.337 user 0m9.631s 00:05:49.337 sys 0m0.406s 00:05:49.337 23:49:55 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.337 ************************************ 00:05:49.337 END TEST event_scheduler 00:05:49.337 ************************************ 00:05:49.337 23:49:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.337 23:49:55 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:49.337 23:49:55 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:49.337 23:49:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.337 23:49:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.337 23:49:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.337 ************************************ 00:05:49.337 START TEST app_repeat 00:05:49.337 ************************************ 00:05:49.337 23:49:55 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:49.337 23:49:55 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.337 23:49:55 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.337 23:49:55 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:49.337 23:49:55 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.337 23:49:55 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:49.337 23:49:55 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:49.337 23:49:55 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:49.337 23:49:55 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59172 00:05:49.337 23:49:55 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:49.337 23:49:55 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.337 23:49:55 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59172' 00:05:49.337 Process app_repeat pid: 59172 00:05:49.337 spdk_app_start Round 0 00:05:49.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:49.337 23:49:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:49.337 23:49:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:49.337 23:49:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59172 /var/tmp/spdk-nbd.sock 00:05:49.337 23:49:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59172 ']' 00:05:49.337 23:49:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.337 23:49:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.337 23:49:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
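app_repeat is now started with its JSON-RPC server on a private Unix socket (-r /var/tmp/spdk-nbd.sock), two reactors (-m 0x3) and four repeat rounds (-t 4), and waitforlisten blocks until that socket answers. A sketch of the same start-then-wait pattern; the polling loop is an assumption about how to wait, not the harness's exact implementation (its retry bound, max_retries=100, is taken from the log):

    sock=/var/tmp/spdk-nbd.sock
    /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
    # Poll until the target's RPC server responds on the socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done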
00:05:49.337 23:49:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.337 23:49:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.337 [2024-11-18 23:49:55.974781] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:05:49.337 [2024-11-18 23:49:55.974950] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59172 ] 00:05:49.596 [2024-11-18 23:49:56.152376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.596 [2024-11-18 23:49:56.238342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.597 [2024-11-18 23:49:56.238357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.856 [2024-11-18 23:49:56.389430] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.424 23:49:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.424 23:49:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:50.424 23:49:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.683 Malloc0 00:05:50.683 23:49:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.944 Malloc1 00:05:50.944 23:49:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.944 23:49:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.944 23:49:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.944 23:49:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.944 23:49:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.944 23:49:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.944 23:49:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.944 23:49:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.944 23:49:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.944 23:49:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.944 23:49:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.944 23:49:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.944 23:49:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:50.944 23:49:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.944 23:49:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.944 23:49:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.204 /dev/nbd0 00:05:51.204 23:49:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.204 23:49:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.204 23:49:57 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:05:51.204 23:49:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.204 23:49:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.204 23:49:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.204 23:49:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:51.204 23:49:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.204 23:49:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.204 23:49:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.204 23:49:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.204 1+0 records in 00:05:51.204 1+0 records out 00:05:51.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466022 s, 8.8 MB/s 00:05:51.204 23:49:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.204 23:49:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.204 23:49:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.204 23:49:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.204 23:49:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:51.204 23:49:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.204 23:49:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.204 23:49:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.463 /dev/nbd1 00:05:51.464 23:49:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.464 23:49:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.464 23:49:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:51.464 23:49:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.464 23:49:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.464 23:49:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.464 23:49:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:51.464 23:49:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.464 23:49:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.464 23:49:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.464 23:49:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.464 1+0 records in 00:05:51.464 1+0 records out 00:05:51.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390637 s, 10.5 MB/s 00:05:51.464 23:49:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.723 23:49:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.723 23:49:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.723 23:49:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.723 23:49:58 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:05:51.723 23:49:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.723 23:49:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.723 23:49:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.723 23:49:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.723 23:49:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.723 23:49:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.723 { 00:05:51.723 "nbd_device": "/dev/nbd0", 00:05:51.723 "bdev_name": "Malloc0" 00:05:51.723 }, 00:05:51.723 { 00:05:51.723 "nbd_device": "/dev/nbd1", 00:05:51.723 "bdev_name": "Malloc1" 00:05:51.723 } 00:05:51.723 ]' 00:05:51.723 23:49:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.723 { 00:05:51.723 "nbd_device": "/dev/nbd0", 00:05:51.723 "bdev_name": "Malloc0" 00:05:51.723 }, 00:05:51.723 { 00:05:51.723 "nbd_device": "/dev/nbd1", 00:05:51.723 "bdev_name": "Malloc1" 00:05:51.723 } 00:05:51.723 ]' 00:05:51.723 23:49:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.983 /dev/nbd1' 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.983 /dev/nbd1' 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.983 256+0 records in 00:05:51.983 256+0 records out 00:05:51.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.006149 s, 171 MB/s 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.983 256+0 records in 00:05:51.983 256+0 records out 00:05:51.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027304 s, 38.4 MB/s 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.983 256+0 records in 00:05:51.983 256+0 
records out 00:05:51.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284872 s, 36.8 MB/s 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.983 23:49:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.242 23:49:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.242 23:49:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.242 23:49:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.242 23:49:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.242 23:49:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.242 23:49:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.242 23:49:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.242 23:49:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.242 23:49:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.242 23:49:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.501 23:49:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.501 23:49:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.501 23:49:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.501 23:49:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.501 23:49:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
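The verify phase that just completed is a plain write/read-back roundtrip: one 1 MiB random file is written through each NBD device with O_DIRECT, then compared byte-for-byte against the device. A minimal sketch assembled from the exact dd/cmp invocations logged above:

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct  # write through the NBD device
        cmp -b -n 1M "$tmp" "$dev"                             # fails loudly on any mismatch
    done
    rm "$tmp"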
00:05:52.501 23:49:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.501 23:49:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.501 23:49:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.501 23:49:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.501 23:49:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.501 23:49:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.760 23:49:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.760 23:49:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.760 23:49:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.760 23:49:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.760 23:49:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.760 23:49:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.760 23:49:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.760 23:49:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.760 23:49:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.760 23:49:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.761 23:49:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.761 23:49:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.761 23:49:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.329 23:49:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.273 [2024-11-18 23:50:00.673475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.273 [2024-11-18 23:50:00.753311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.273 [2024-11-18 23:50:00.753320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.273 [2024-11-18 23:50:00.903662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.273 [2024-11-18 23:50:00.903794] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.273 [2024-11-18 23:50:00.903820] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:56.176 23:50:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:56.176 23:50:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:56.176 spdk_app_start Round 1 00:05:56.176 23:50:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59172 /var/tmp/spdk-nbd.sock 00:05:56.176 23:50:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59172 ']' 00:05:56.176 23:50:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.176 23:50:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.176 23:50:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
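Before the next round starts, the harness confirms that no NBD exports survived the teardown: nbd_get_disks returns an empty JSON array, so the grep -c count is 0. A sketch of the same check over the RPC socket (the || true guard is an addition for set -e shells, since grep -c exits non-zero when the count is 0):

    sock=/var/tmp/spdk-nbd.sock
    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]   # expected here: nbd_stop_disk removed both exports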
00:05:56.176 23:50:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.176 23:50:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.436 23:50:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.436 23:50:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:56.436 23:50:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.695 Malloc0 00:05:56.954 23:50:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.213 Malloc1 00:05:57.213 23:50:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.213 23:50:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.213 23:50:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.213 23:50:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:57.213 23:50:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.213 23:50:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:57.213 23:50:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.213 23:50:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.213 23:50:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.213 23:50:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:57.213 23:50:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.213 23:50:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:57.213 23:50:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:57.213 23:50:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:57.213 23:50:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.213 23:50:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:57.213 /dev/nbd0 00:05:57.472 23:50:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:57.472 23:50:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:57.472 23:50:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:57.472 23:50:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:57.472 23:50:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.472 23:50:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.472 23:50:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:57.472 23:50:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:57.472 23:50:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.472 23:50:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.472 23:50:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.472 1+0 records in 00:05:57.472 1+0 records out 
00:05:57.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204673 s, 20.0 MB/s 00:05:57.472 23:50:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.472 23:50:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:57.473 23:50:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.473 23:50:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.473 23:50:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:57.473 23:50:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.473 23:50:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.473 23:50:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:57.732 /dev/nbd1 00:05:57.732 23:50:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:57.732 23:50:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:57.732 23:50:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:57.732 23:50:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:57.732 23:50:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.732 23:50:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.732 23:50:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:57.732 23:50:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:57.732 23:50:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.732 23:50:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.732 23:50:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.732 1+0 records in 00:05:57.732 1+0 records out 00:05:57.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282803 s, 14.5 MB/s 00:05:57.732 23:50:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.732 23:50:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:57.732 23:50:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.732 23:50:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.732 23:50:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:57.732 23:50:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.732 23:50:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.732 23:50:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.732 23:50:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.732 23:50:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:57.991 { 00:05:57.991 "nbd_device": "/dev/nbd0", 00:05:57.991 "bdev_name": "Malloc0" 00:05:57.991 }, 00:05:57.991 { 00:05:57.991 "nbd_device": "/dev/nbd1", 00:05:57.991 "bdev_name": "Malloc1" 00:05:57.991 } 
00:05:57.991 ]' 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:57.991 { 00:05:57.991 "nbd_device": "/dev/nbd0", 00:05:57.991 "bdev_name": "Malloc0" 00:05:57.991 }, 00:05:57.991 { 00:05:57.991 "nbd_device": "/dev/nbd1", 00:05:57.991 "bdev_name": "Malloc1" 00:05:57.991 } 00:05:57.991 ]' 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:57.991 /dev/nbd1' 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:57.991 /dev/nbd1' 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:57.991 256+0 records in 00:05:57.991 256+0 records out 00:05:57.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00808384 s, 130 MB/s 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:57.991 256+0 records in 00:05:57.991 256+0 records out 00:05:57.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241603 s, 43.4 MB/s 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.991 23:50:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.251 256+0 records in 00:05:58.251 256+0 records out 00:05:58.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0325987 s, 32.2 MB/s 00:05:58.251 23:50:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.251 23:50:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.251 23:50:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.251 23:50:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.251 23:50:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.251 23:50:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.251 23:50:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.251 23:50:04 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.251 23:50:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.251 23:50:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.251 23:50:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.251 23:50:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.251 23:50:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.251 23:50:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.251 23:50:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.251 23:50:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.251 23:50:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:58.251 23:50:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.251 23:50:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.510 23:50:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.510 23:50:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.510 23:50:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.510 23:50:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.510 23:50:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.510 23:50:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.510 23:50:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.510 23:50:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.510 23:50:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.511 23:50:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:58.770 23:50:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.770 23:50:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.770 23:50:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.770 23:50:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.770 23:50:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.770 23:50:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.770 23:50:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.770 23:50:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.770 23:50:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.770 23:50:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.770 23:50:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.029 23:50:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.029 23:50:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.029 23:50:05 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:59.029 23:50:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.029 23:50:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.029 23:50:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.029 23:50:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:59.029 23:50:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.029 23:50:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.029 23:50:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.029 23:50:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.029 23:50:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.029 23:50:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.597 23:50:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:00.534 [2024-11-18 23:50:06.927227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.534 [2024-11-18 23:50:07.008350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.534 [2024-11-18 23:50:07.008357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.534 [2024-11-18 23:50:07.148626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.534 [2024-11-18 23:50:07.148786] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.534 [2024-11-18 23:50:07.148805] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:02.437 spdk_app_start Round 2 00:06:02.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:02.437 23:50:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:02.437 23:50:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:02.437 23:50:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59172 /var/tmp/spdk-nbd.sock 00:06:02.437 23:50:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59172 ']' 00:06:02.437 23:50:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.437 23:50:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.437 23:50:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
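Each round ends the same way as the one just logged: the test delivers SIGTERM to the target through its own RPC server rather than signalling the process from outside, sleeps three seconds, and lets app_repeat (started with -t 4) bring the framework back up for the next round. The teardown as issued in this log:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3   # give the app time to unwind before the next round's waitforlisten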
00:06:02.437 23:50:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.437 23:50:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.697 23:50:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.697 23:50:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:02.697 23:50:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.267 Malloc0 00:06:03.267 23:50:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.526 Malloc1 00:06:03.526 23:50:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.526 23:50:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.526 23:50:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.526 23:50:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.526 23:50:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.526 23:50:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.526 23:50:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.526 23:50:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.526 23:50:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.526 23:50:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.526 23:50:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.526 23:50:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.526 23:50:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:03.526 23:50:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.526 23:50:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.526 23:50:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.786 /dev/nbd0 00:06:03.786 23:50:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.786 23:50:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.786 23:50:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:03.786 23:50:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:03.786 23:50:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:03.786 23:50:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:03.786 23:50:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:03.786 23:50:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:03.786 23:50:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:03.786 23:50:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:03.786 23:50:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.786 1+0 records in 00:06:03.786 1+0 records out 
00:06:03.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242503 s, 16.9 MB/s 00:06:03.786 23:50:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.786 23:50:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:03.786 23:50:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.786 23:50:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:03.786 23:50:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:03.786 23:50:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.786 23:50:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.786 23:50:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:04.045 /dev/nbd1 00:06:04.045 23:50:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:04.045 23:50:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:04.045 23:50:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:04.045 23:50:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:04.045 23:50:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.045 23:50:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.045 23:50:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:04.045 23:50:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:04.045 23:50:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.045 23:50:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.045 23:50:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.045 1+0 records in 00:06:04.045 1+0 records out 00:06:04.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263911 s, 15.5 MB/s 00:06:04.045 23:50:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.045 23:50:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:04.045 23:50:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.045 23:50:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.045 23:50:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:04.046 23:50:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.046 23:50:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.046 23:50:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.046 23:50:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.046 23:50:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:04.305 { 00:06:04.305 "nbd_device": "/dev/nbd0", 00:06:04.305 "bdev_name": "Malloc0" 00:06:04.305 }, 00:06:04.305 { 00:06:04.305 "nbd_device": "/dev/nbd1", 00:06:04.305 "bdev_name": "Malloc1" 00:06:04.305 } 
00:06:04.305 ]' 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.305 { 00:06:04.305 "nbd_device": "/dev/nbd0", 00:06:04.305 "bdev_name": "Malloc0" 00:06:04.305 }, 00:06:04.305 { 00:06:04.305 "nbd_device": "/dev/nbd1", 00:06:04.305 "bdev_name": "Malloc1" 00:06:04.305 } 00:06:04.305 ]' 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.305 /dev/nbd1' 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.305 /dev/nbd1' 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.305 256+0 records in 00:06:04.305 256+0 records out 00:06:04.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00672271 s, 156 MB/s 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.305 256+0 records in 00:06:04.305 256+0 records out 00:06:04.305 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245826 s, 42.7 MB/s 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.305 23:50:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.565 256+0 records in 00:06:04.565 256+0 records out 00:06:04.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313382 s, 33.5 MB/s 00:06:04.565 23:50:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.565 23:50:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.565 23:50:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.565 23:50:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.565 23:50:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.565 23:50:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.565 23:50:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.565 23:50:11 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.565 23:50:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.565 23:50:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.565 23:50:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.565 23:50:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.565 23:50:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.565 23:50:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.565 23:50:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.565 23:50:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.565 23:50:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:04.565 23:50:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.565 23:50:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.824 23:50:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.824 23:50:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.824 23:50:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.824 23:50:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.824 23:50:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.824 23:50:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.824 23:50:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.824 23:50:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.824 23:50:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.824 23:50:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.083 23:50:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.083 23:50:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.083 23:50:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.083 23:50:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.083 23:50:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.083 23:50:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.083 23:50:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.083 23:50:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.083 23:50:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.083 23:50:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.083 23:50:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.342 23:50:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.342 23:50:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.342 23:50:11 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:05.342 23:50:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.342 23:50:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.342 23:50:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.342 23:50:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:05.342 23:50:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.342 23:50:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.342 23:50:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.342 23:50:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.343 23:50:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.343 23:50:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:05.910 23:50:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.847 [2024-11-18 23:50:13.291815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.847 [2024-11-18 23:50:13.370913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.847 [2024-11-18 23:50:13.370922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.847 [2024-11-18 23:50:13.511725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.847 [2024-11-18 23:50:13.511858] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:06.847 [2024-11-18 23:50:13.511880] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.381 23:50:15 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59172 /var/tmp/spdk-nbd.sock 00:06:09.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.381 23:50:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59172 ']' 00:06:09.381 23:50:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.381 23:50:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.381 23:50:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
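With the verify pass clean, Round 2 ends exactly like Round 1: the app is asked to terminate itself over RPC and the harness pauses before Round 3 reconnects. The app under test is launched once and re-runs spdk_app_start after each SIGTERM, so the outer loop in event.sh only drives one round per signal. A skeleton of that loop, assuming $app_pid holds the test app's pid and waitforlisten is the harness helper from autotest_common.sh:

  for i in 0 1 2; do
      echo "spdk_app_start Round $i"
      waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock
      # ... create Malloc0/Malloc1, run the nbd write/verify pass shown above ...
      "$rpc" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
      sleep 3   # give the app's next internal iteration time to come up
  done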
00:06:09.381 23:50:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.382 23:50:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.382 23:50:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.382 23:50:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:09.382 23:50:15 event.app_repeat -- event/event.sh@39 -- # killprocess 59172 00:06:09.382 23:50:15 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59172 ']' 00:06:09.382 23:50:15 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59172 00:06:09.382 23:50:15 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:09.382 23:50:15 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.382 23:50:15 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59172 00:06:09.382 23:50:15 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.382 killing process with pid 59172 00:06:09.382 23:50:15 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.382 23:50:15 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59172' 00:06:09.382 23:50:15 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59172 00:06:09.382 23:50:15 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59172 00:06:09.949 spdk_app_start is called in Round 0. 00:06:09.949 Shutdown signal received, stop current app iteration 00:06:09.949 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:06:09.949 spdk_app_start is called in Round 1. 00:06:09.949 Shutdown signal received, stop current app iteration 00:06:09.949 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:06:09.949 spdk_app_start is called in Round 2. 00:06:09.949 Shutdown signal received, stop current app iteration 00:06:09.949 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:06:09.949 spdk_app_start is called in Round 3. 00:06:09.949 Shutdown signal received, stop current app iteration 00:06:09.949 23:50:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:09.949 ************************************ 00:06:09.949 END TEST app_repeat 00:06:09.949 ************************************ 00:06:09.949 23:50:16 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:09.949 00:06:09.949 real 0m20.641s 00:06:09.949 user 0m46.149s 00:06:09.949 sys 0m2.616s 00:06:09.949 23:50:16 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.949 23:50:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.949 23:50:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:09.949 23:50:16 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:09.949 23:50:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.949 23:50:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.949 23:50:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.949 ************************************ 00:06:09.949 START TEST cpu_locks 00:06:09.949 ************************************ 00:06:09.949 23:50:16 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:10.209 * Looking for test storage... 
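app_repeat's closing banner and the real/user/sys summary above are produced by run_test, the same wrapper that opens the cpu_locks suite here. The log only shows its output, so the helper below is a deliberately simplified stand-in (the real one in autotest_common.sh also manages xtrace state, and its banner/timing output order differs slightly from this sketch):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                  # source of the real/user/sys lines
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }

  run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh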
00:06:10.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:10.209 23:50:16 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.209 23:50:16 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.209 23:50:16 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.209 23:50:16 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.209 23:50:16 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:10.209 23:50:16 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.209 23:50:16 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.209 --rc genhtml_branch_coverage=1 00:06:10.209 --rc genhtml_function_coverage=1 00:06:10.209 --rc genhtml_legend=1 00:06:10.209 --rc geninfo_all_blocks=1 00:06:10.209 --rc geninfo_unexecuted_blocks=1 00:06:10.209 00:06:10.209 ' 00:06:10.209 23:50:16 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.209 --rc genhtml_branch_coverage=1 00:06:10.209 --rc genhtml_function_coverage=1 
00:06:10.209 --rc genhtml_legend=1 00:06:10.209 --rc geninfo_all_blocks=1 00:06:10.209 --rc geninfo_unexecuted_blocks=1 00:06:10.209 00:06:10.209 ' 00:06:10.209 23:50:16 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.210 --rc genhtml_branch_coverage=1 00:06:10.210 --rc genhtml_function_coverage=1 00:06:10.210 --rc genhtml_legend=1 00:06:10.210 --rc geninfo_all_blocks=1 00:06:10.210 --rc geninfo_unexecuted_blocks=1 00:06:10.210 00:06:10.210 ' 00:06:10.210 23:50:16 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.210 --rc genhtml_branch_coverage=1 00:06:10.210 --rc genhtml_function_coverage=1 00:06:10.210 --rc genhtml_legend=1 00:06:10.210 --rc geninfo_all_blocks=1 00:06:10.210 --rc geninfo_unexecuted_blocks=1 00:06:10.210 00:06:10.210 ' 00:06:10.210 23:50:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:10.210 23:50:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:10.210 23:50:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:10.210 23:50:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:10.210 23:50:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.210 23:50:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.210 23:50:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.210 ************************************ 00:06:10.210 START TEST default_locks 00:06:10.210 ************************************ 00:06:10.210 23:50:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:10.210 23:50:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59631 00:06:10.210 23:50:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59631 00:06:10.210 23:50:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59631 ']' 00:06:10.210 23:50:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.210 23:50:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.210 23:50:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.210 23:50:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.210 23:50:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.210 23:50:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.210 [2024-11-18 23:50:16.889069] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
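default_locks starts here: a bare spdk_tgt pinned to core 0, a wait for its RPC socket, then an assertion that the target holds the advisory cpu-core lock. waitforlisten lives in autotest_common.sh; the poll below is only a rough stand-in for it, using the real rpc_get_methods RPC as a liveness probe, while the lslocks check mirrors the locks_exist trace that follows:

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$tgt" -m 0x1 &
  pid=$!

  # rough stand-in for waitforlisten: poll the default RPC socket
  until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$pid" 2>/dev/null || exit 1   # target died during startup
      sleep 0.1
  done

  # locks_exist: every core in the mask (just core 0 here) should show
  # up as an spdk_cpu_lock entry among the target's advisory locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock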
00:06:10.210 [2024-11-18 23:50:16.889241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59631 ] 00:06:10.469 [2024-11-18 23:50:17.052706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.469 [2024-11-18 23:50:17.143092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.727 [2024-11-18 23:50:17.328597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.295 23:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.295 23:50:17 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:11.295 23:50:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59631 00:06:11.295 23:50:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59631 00:06:11.295 23:50:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.553 23:50:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59631 00:06:11.553 23:50:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59631 ']' 00:06:11.553 23:50:18 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59631 00:06:11.553 23:50:18 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:11.553 23:50:18 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.553 23:50:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59631 00:06:11.813 23:50:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.813 23:50:18 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.813 killing process with pid 59631 00:06:11.813 23:50:18 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59631' 00:06:11.813 23:50:18 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59631 00:06:11.813 23:50:18 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59631 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59631 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59631 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59631 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59631 ']' 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.715 
23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.715 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59631) - No such process 00:06:13.715 ERROR: process (pid: 59631) is no longer running 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:13.715 00:06:13.715 real 0m3.250s 00:06:13.715 user 0m3.406s 00:06:13.715 sys 0m0.522s 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.715 23:50:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.715 ************************************ 00:06:13.715 END TEST default_locks 00:06:13.715 ************************************ 00:06:13.715 23:50:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:13.715 23:50:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.716 23:50:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.716 23:50:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.716 ************************************ 00:06:13.716 START TEST default_locks_via_rpc 00:06:13.716 ************************************ 00:06:13.716 23:50:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:13.716 23:50:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59701 00:06:13.716 23:50:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.716 23:50:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59701 00:06:13.716 23:50:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59701 ']' 00:06:13.716 23:50:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.716 23:50:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:06:13.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.716 23:50:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.716 23:50:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.716 23:50:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.716 [2024-11-18 23:50:20.222794] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:13.716 [2024-11-18 23:50:20.222981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59701 ] 00:06:13.716 [2024-11-18 23:50:20.401620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.974 [2024-11-18 23:50:20.484776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.233 [2024-11-18 23:50:20.678998] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.492 23:50:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.492 23:50:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:14.492 23:50:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:14.492 23:50:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.492 23:50:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.492 23:50:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.492 23:50:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:14.492 23:50:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:14.492 23:50:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:14.492 23:50:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:14.492 23:50:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:14.492 23:50:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.492 23:50:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.492 23:50:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.492 23:50:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59701 00:06:14.492 23:50:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59701 00:06:14.492 23:50:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.095 23:50:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59701 00:06:15.095 23:50:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59701 ']' 00:06:15.095 23:50:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59701 00:06:15.095 23:50:21 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:15.095 23:50:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.095 23:50:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59701 00:06:15.095 23:50:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.095 23:50:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.095 killing process with pid 59701 00:06:15.095 23:50:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59701' 00:06:15.095 23:50:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59701 00:06:15.095 23:50:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59701 00:06:17.014 00:06:17.014 real 0m3.324s 00:06:17.014 user 0m3.467s 00:06:17.014 sys 0m0.594s 00:06:17.014 23:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.014 23:50:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.014 ************************************ 00:06:17.014 END TEST default_locks_via_rpc 00:06:17.014 ************************************ 00:06:17.014 23:50:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:17.014 23:50:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.014 23:50:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.014 23:50:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.014 ************************************ 00:06:17.014 START TEST non_locking_app_on_locked_coremask 00:06:17.014 ************************************ 00:06:17.014 23:50:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:17.014 23:50:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59767 00:06:17.014 23:50:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59767 /var/tmp/spdk.sock 00:06:17.014 23:50:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59767 ']' 00:06:17.014 23:50:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.014 23:50:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.014 23:50:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.014 23:50:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
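The default_locks_via_rpc run that just finished (real 0m3.324s above) flips the same lock at runtime instead of at startup: framework_disable_cpumask_locks releases the per-core lock on a live target and framework_enable_cpumask_locks re-claims it; both RPCs appear verbatim in its trace. A compressed sketch of the round trip, reusing $pid and $rpc from the previous sketch — note the absence check is phrased with lslocks here, whereas the harness's no_locks helper inspects lock files directly:

  "$rpc" framework_disable_cpumask_locks         # drop the core locks
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "core locks still held after disable" >&2
      exit 1
  fi

  "$rpc" framework_enable_cpumask_locks          # re-claim them
  lslocks -p "$pid" | grep -q spdk_cpu_lock      # must exist again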
00:06:17.014 23:50:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.014 23:50:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.014 [2024-11-18 23:50:23.595871] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:17.014 [2024-11-18 23:50:23.596061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59767 ] 00:06:17.273 [2024-11-18 23:50:23.760634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.273 [2024-11-18 23:50:23.848169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.532 [2024-11-18 23:50:24.042117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.100 23:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.100 23:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:18.100 23:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59783 00:06:18.100 23:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:18.100 23:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59783 /var/tmp/spdk2.sock 00:06:18.100 23:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59783 ']' 00:06:18.100 23:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.100 23:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.100 23:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.100 23:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.100 23:50:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.100 [2024-11-18 23:50:24.695962] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:18.100 [2024-11-18 23:50:24.696160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59783 ] 00:06:18.359 [2024-11-18 23:50:24.885338] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
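non_locking_app_on_locked_coremask pairs two targets on the same core: 59767 claims core 0's lock the normal way, while 59783 starts with --disable-cpumask-locks and its own RPC socket, so it skips the claim entirely — that is what the "CPU core locks deactivated" notice above records. The launch pair, condensed straight from the trace:

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$tgt" -m 0x1 &                    # first instance claims core 0
  pid1=$!
  # second instance: same core mask, opts out of the lock, and talks on
  # a separate RPC socket so the two targets don't collide
  "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!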
00:06:18.359 [2024-11-18 23:50:24.885425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.618 [2024-11-18 23:50:25.066783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.877 [2024-11-18 23:50:25.488153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.812 23:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.812 23:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:19.812 23:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59767 00:06:19.812 23:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59767 00:06:19.812 23:50:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.750 23:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59767 00:06:20.750 23:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59767 ']' 00:06:20.750 23:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59767 00:06:20.750 23:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:20.750 23:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.750 23:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59767 00:06:20.750 23:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.750 killing process with pid 59767 00:06:20.750 23:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.750 23:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59767' 00:06:20.750 23:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59767 00:06:20.750 23:50:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59767 00:06:24.941 23:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59783 00:06:24.941 23:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59783 ']' 00:06:24.941 23:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59783 00:06:24.942 23:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:24.942 23:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.942 23:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59783 00:06:24.942 23:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.942 23:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.942 killing process with pid 59783 00:06:24.942 23:50:30 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59783' 00:06:24.942 23:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59783 00:06:24.942 23:50:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59783 00:06:26.325 00:06:26.326 real 0m9.174s 00:06:26.326 user 0m9.778s 00:06:26.326 sys 0m1.180s 00:06:26.326 23:50:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.326 23:50:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.326 ************************************ 00:06:26.326 END TEST non_locking_app_on_locked_coremask 00:06:26.326 ************************************ 00:06:26.326 23:50:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:26.326 23:50:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.326 23:50:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.326 23:50:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.326 ************************************ 00:06:26.326 START TEST locking_app_on_unlocked_coremask 00:06:26.326 ************************************ 00:06:26.326 23:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:26.326 23:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59907 00:06:26.326 23:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59907 /var/tmp/spdk.sock 00:06:26.326 23:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59907 ']' 00:06:26.326 23:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:26.326 23:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.326 23:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.326 23:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.326 23:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.326 23:50:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.326 [2024-11-18 23:50:32.832426] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:26.326 [2024-11-18 23:50:32.832671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59907 ] 00:06:26.586 [2024-11-18 23:50:33.016015] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
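Teardown throughout this suite goes through killprocess, whose trace repeats above for pids 59172, 59631, 59701, 59767, and 59783: it checks that the pid still names an SPDK reactor before signalling, then reaps it. A condensed rendering of the visible sequence — the real helper also special-cases tests running under sudo, which is what the "'[' reactor_0 = sudo ']'" steps in the trace are probing:

  killprocess() {
      local pid=$1
      # only proceed if the process is still an SPDK reactor thread
      if [ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ]; then
          echo "killing process with pid $pid"
          kill "$pid"
          wait "$pid" || true   # reap it; a SIGTERM'd target exits non-zero
      fi
  }

  killprocess 59783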
00:06:26.586 [2024-11-18 23:50:33.016128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.586 [2024-11-18 23:50:33.102159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.845 [2024-11-18 23:50:33.290400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.105 23:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.105 23:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:27.105 23:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59923 00:06:27.105 23:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59923 /var/tmp/spdk2.sock 00:06:27.105 23:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59923 ']' 00:06:27.105 23:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.105 23:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.105 23:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.105 23:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.105 23:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.105 23:50:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.364 [2024-11-18 23:50:33.911747] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:27.364 [2024-11-18 23:50:33.911939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59923 ] 00:06:27.623 [2024-11-18 23:50:34.099513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.623 [2024-11-18 23:50:34.269918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.192 [2024-11-18 23:50:34.677812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.132 23:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.132 23:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:29.132 23:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59923 00:06:29.132 23:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59923 00:06:29.132 23:50:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.071 23:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59907 00:06:30.071 23:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59907 ']' 00:06:30.071 23:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59907 00:06:30.071 23:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:30.071 23:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.071 23:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59907 00:06:30.071 killing process with pid 59907 00:06:30.071 23:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.071 23:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.071 23:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59907' 00:06:30.071 23:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59907 00:06:30.071 23:50:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59907 00:06:33.364 23:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59923 00:06:33.364 23:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59923 ']' 00:06:33.364 23:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59923 00:06:33.364 23:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:33.364 23:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.364 23:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59923 00:06:33.624 killing process with pid 59923 00:06:33.624 23:50:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.624 23:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.624 23:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59923' 00:06:33.624 23:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59923 00:06:33.624 23:50:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59923 00:06:35.613 00:06:35.613 real 0m9.124s 00:06:35.613 user 0m9.649s 00:06:35.613 sys 0m1.219s 00:06:35.613 23:50:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.613 ************************************ 00:06:35.613 23:50:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.613 END TEST locking_app_on_unlocked_coremask 00:06:35.613 ************************************ 00:06:35.613 23:50:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:35.613 23:50:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.613 23:50:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.613 23:50:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.613 ************************************ 00:06:35.613 START TEST locking_app_on_locked_coremask 00:06:35.613 ************************************ 00:06:35.613 23:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:35.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.613 23:50:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60049 00:06:35.614 23:50:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60049 /var/tmp/spdk.sock 00:06:35.614 23:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60049 ']' 00:06:35.614 23:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.614 23:50:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.614 23:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.614 23:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.614 23:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.614 23:50:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.614 [2024-11-18 23:50:42.017928] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:35.614 [2024-11-18 23:50:42.018140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60049 ] 00:06:35.614 [2024-11-18 23:50:42.199585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.899 [2024-11-18 23:50:42.298276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.899 [2024-11-18 23:50:42.493137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.469 23:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.469 23:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:36.469 23:50:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60066 00:06:36.469 23:50:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.469 23:50:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60066 /var/tmp/spdk2.sock 00:06:36.469 23:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:36.469 23:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60066 /var/tmp/spdk2.sock 00:06:36.469 23:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:36.469 23:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.469 23:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:36.469 23:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.469 23:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60066 /var/tmp/spdk2.sock 00:06:36.469 23:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60066 ']' 00:06:36.469 23:50:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.469 23:50:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.469 23:50:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.469 23:50:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.469 23:50:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.469 [2024-11-18 23:50:43.102481] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
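The test just launched a second spdk_tgt (pid 60066) asking for core 0 while pid 60049 still holds it; as the next lines show, it hits claim_cpu_cores, logs 'Cannot create lock on core 0', and exits. A minimal way to reproduce the double-claim by hand, sketched with the build path used throughout this log:

    # first instance claims core 0 and takes /var/tmp/spdk_cpu_lock_000
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    # second instance on the same core but a separate RPC socket is
    # expected to log the claim_cpu_cores error and exit non-zero
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock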
00:06:36.469 [2024-11-18 23:50:43.102892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60066 ] 00:06:36.729 [2024-11-18 23:50:43.282349] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60049 has claimed it. 00:06:36.729 [2024-11-18 23:50:43.282452] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:37.297 ERROR: process (pid: 60066) is no longer running 00:06:37.297 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60066) - No such process 00:06:37.297 23:50:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.297 23:50:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:37.297 23:50:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:37.297 23:50:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:37.297 23:50:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:37.297 23:50:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:37.297 23:50:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60049 00:06:37.297 23:50:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.297 23:50:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60049 00:06:37.557 23:50:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60049 00:06:37.557 23:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60049 ']' 00:06:37.557 23:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60049 00:06:37.557 23:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:37.557 23:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.557 23:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60049 00:06:37.557 killing process with pid 60049 00:06:37.557 23:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.557 23:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.557 23:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60049' 00:06:37.557 23:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60049 00:06:37.557 23:50:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60049 00:06:39.465 00:06:39.465 real 0m4.111s 00:06:39.465 user 0m4.521s 00:06:39.465 sys 0m0.719s 00:06:39.465 23:50:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.465 23:50:45 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:39.465 ************************************ 00:06:39.465 END TEST locking_app_on_locked_coremask 00:06:39.465 ************************************ 00:06:39.465 23:50:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:39.465 23:50:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.465 23:50:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.465 23:50:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.465 ************************************ 00:06:39.465 START TEST locking_overlapped_coremask 00:06:39.465 ************************************ 00:06:39.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.465 23:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:39.465 23:50:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60125 00:06:39.465 23:50:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:39.465 23:50:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60125 /var/tmp/spdk.sock 00:06:39.465 23:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60125 ']' 00:06:39.465 23:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.465 23:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.465 23:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.465 23:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.465 23:50:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.724 [2024-11-18 23:50:46.224693] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:39.724 [2024-11-18 23:50:46.225106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60125 ] 00:06:39.724 [2024-11-18 23:50:46.404696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.983 [2024-11-18 23:50:46.489137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.983 [2024-11-18 23:50:46.489246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.983 [2024-11-18 23:50:46.489292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.242 [2024-11-18 23:50:46.678783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.809 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.809 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:40.809 23:50:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60148 00:06:40.809 23:50:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:40.809 23:50:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60148 /var/tmp/spdk2.sock 00:06:40.809 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:40.809 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60148 /var/tmp/spdk2.sock 00:06:40.809 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:40.809 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.809 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:40.809 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.809 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60148 /var/tmp/spdk2.sock 00:06:40.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.809 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60148 ']' 00:06:40.809 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.809 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.809 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.809 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.809 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.809 [2024-11-18 23:50:47.349163] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:06:40.809 [2024-11-18 23:50:47.349350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60148 ] 00:06:41.069 [2024-11-18 23:50:47.545972] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60125 has claimed it. 00:06:41.069 [2024-11-18 23:50:47.546065] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:41.328 ERROR: process (pid: 60148) is no longer running 00:06:41.328 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60148) - No such process 00:06:41.328 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.328 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:41.328 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:41.328 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:41.328 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:41.328 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:41.328 23:50:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:41.329 23:50:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.329 23:50:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.329 23:50:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.329 23:50:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60125 00:06:41.329 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60125 ']' 00:06:41.329 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60125 00:06:41.329 23:50:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:41.329 23:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.329 23:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60125 00:06:41.588 23:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.588 23:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.588 23:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60125' 00:06:41.588 killing process with pid 60125 00:06:41.588 23:50:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60125 00:06:41.588 23:50:48 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60125 00:06:43.493 00:06:43.493 real 0m3.996s 00:06:43.493 user 0m10.932s 00:06:43.493 sys 0m0.613s 00:06:43.493 23:50:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.493 23:50:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.493 ************************************ 00:06:43.493 END TEST locking_overlapped_coremask 00:06:43.493 ************************************ 00:06:43.493 23:50:50 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:43.493 23:50:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.493 23:50:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.493 23:50:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.493 ************************************ 00:06:43.493 START TEST locking_overlapped_coremask_via_rpc 00:06:43.493 ************************************ 00:06:43.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.493 23:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:43.493 23:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60207 00:06:43.493 23:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60207 /var/tmp/spdk.sock 00:06:43.493 23:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:43.493 23:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60207 ']' 00:06:43.493 23:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.493 23:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.493 23:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.493 23:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.493 23:50:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.753 [2024-11-18 23:50:50.231225] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:43.753 [2024-11-18 23:50:50.232236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60207 ] 00:06:43.753 [2024-11-18 23:50:50.417228] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
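With --disable-cpumask-locks the target only logs 'CPU core locks deactivated': no /var/tmp/spdk_cpu_lock_* files are taken at startup, and the locks are claimed later via the framework_enable_cpumask_locks RPC exercised further down. A quick way to see the difference, sketched with the lock-file prefix this log uses:

    # default start-up: one lock file per claimed core
    ls -l /var/tmp/spdk_cpu_lock_*     # expect _000 _001 _002 for -m 0x7
    lslocks | grep spdk_cpu_lock       # the flock() holders
    # after --disable-cpumask-locks, the files appear only once
    # framework_enable_cpumask_locks is called over RPC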
00:06:43.753 [2024-11-18 23:50:50.417443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.013 [2024-11-18 23:50:50.522822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.013 [2024-11-18 23:50:50.522984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.013 [2024-11-18 23:50:50.522994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.272 [2024-11-18 23:50:50.723495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.841 23:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.841 23:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:44.841 23:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60225 00:06:44.841 23:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60225 /var/tmp/spdk2.sock 00:06:44.841 23:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:44.841 23:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60225 ']' 00:06:44.841 23:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.841 23:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.841 23:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.841 23:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.841 23:50:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.841 [2024-11-18 23:50:51.354662] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:44.841 [2024-11-18 23:50:51.355477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60225 ] 00:06:45.100 [2024-11-18 23:50:51.546314] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:45.100 [2024-11-18 23:50:51.546378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.100 [2024-11-18 23:50:51.740242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.100 [2024-11-18 23:50:51.740331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.100 [2024-11-18 23:50:51.740340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:45.668 [2024-11-18 23:50:52.170127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.606 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.606 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.606 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:46.606 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.606 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.606 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.606 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.606 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:46.606 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.606 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:46.606 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.606 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:46.606 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.606 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.606 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.606 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.606 [2024-11-18 23:50:53.159864] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60207 has claimed it. 00:06:46.606 request: 00:06:46.606 { 00:06:46.606 "method": "framework_enable_cpumask_locks", 00:06:46.606 "req_id": 1 00:06:46.606 } 00:06:46.606 Got JSON-RPC error response 00:06:46.606 response: 00:06:46.606 { 00:06:46.606 "code": -32603, 00:06:46.606 "message": "Failed to claim CPU core: 2" 00:06:46.606 } 00:06:46.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:46.607 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:46.607 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:46.607 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.607 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.607 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.607 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60207 /var/tmp/spdk.sock 00:06:46.607 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60207 ']' 00:06:46.607 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.607 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.607 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.607 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.607 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.866 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.866 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.866 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60225 /var/tmp/spdk2.sock 00:06:46.866 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60225 ']' 00:06:46.866 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.866 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.866 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:46.866 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.866 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.125 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.125 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:47.125 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:47.125 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:47.125 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:47.125 ************************************ 00:06:47.125 END TEST locking_overlapped_coremask_via_rpc 00:06:47.125 ************************************ 00:06:47.125 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:47.125 00:06:47.125 real 0m3.668s 00:06:47.125 user 0m1.433s 00:06:47.125 sys 0m0.185s 00:06:47.125 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.125 23:50:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.125 23:50:53 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:47.125 23:50:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60207 ]] 00:06:47.125 23:50:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60207 00:06:47.125 23:50:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60207 ']' 00:06:47.125 23:50:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60207 00:06:47.125 23:50:53 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:47.125 23:50:53 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.125 23:50:53 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60207 00:06:47.384 killing process with pid 60207 00:06:47.384 23:50:53 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.384 23:50:53 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.384 23:50:53 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60207' 00:06:47.384 23:50:53 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60207 00:06:47.384 23:50:53 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60207 00:06:49.290 23:50:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60225 ]] 00:06:49.290 23:50:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60225 00:06:49.290 23:50:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60225 ']' 00:06:49.290 23:50:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60225 00:06:49.290 23:50:55 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:49.290 23:50:55 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.290 
23:50:55 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60225 00:06:49.290 killing process with pid 60225 00:06:49.290 23:50:55 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:49.290 23:50:55 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:49.290 23:50:55 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60225' 00:06:49.290 23:50:55 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60225 00:06:49.290 23:50:55 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60225 00:06:51.194 23:50:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:51.194 23:50:57 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:51.194 23:50:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60207 ]] 00:06:51.194 23:50:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60207 00:06:51.194 Process with pid 60207 is not found 00:06:51.194 23:50:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60207 ']' 00:06:51.194 23:50:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60207 00:06:51.194 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60207) - No such process 00:06:51.194 23:50:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60207 is not found' 00:06:51.194 23:50:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60225 ]] 00:06:51.194 Process with pid 60225 is not found 00:06:51.194 23:50:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60225 00:06:51.194 23:50:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60225 ']' 00:06:51.194 23:50:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60225 00:06:51.194 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60225) - No such process 00:06:51.194 23:50:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60225 is not found' 00:06:51.194 23:50:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:51.194 00:06:51.194 real 0m41.032s 00:06:51.194 user 1m11.726s 00:06:51.194 sys 0m6.032s 00:06:51.194 23:50:57 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.194 ************************************ 00:06:51.194 END TEST cpu_locks 00:06:51.194 ************************************ 00:06:51.194 23:50:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.194 ************************************ 00:06:51.194 END TEST event 00:06:51.194 ************************************ 00:06:51.194 00:06:51.194 real 1m12.100s 00:06:51.194 user 2m14.552s 00:06:51.194 sys 0m9.614s 00:06:51.194 23:50:57 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.194 23:50:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.194 23:50:57 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:51.194 23:50:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.194 23:50:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.194 23:50:57 -- common/autotest_common.sh@10 -- # set +x 00:06:51.194 ************************************ 00:06:51.194 START TEST thread 00:06:51.194 ************************************ 00:06:51.194 23:50:57 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:51.194 * Looking for test storage... 
00:06:51.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:51.194 23:50:57 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:51.194 23:50:57 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:51.194 23:50:57 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:51.453 23:50:57 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:51.453 23:50:57 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.453 23:50:57 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.453 23:50:57 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.453 23:50:57 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.453 23:50:57 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.453 23:50:57 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.453 23:50:57 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.453 23:50:57 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.453 23:50:57 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.453 23:50:57 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.453 23:50:57 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.453 23:50:57 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:51.453 23:50:57 thread -- scripts/common.sh@345 -- # : 1 00:06:51.453 23:50:57 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.453 23:50:57 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:51.453 23:50:57 thread -- scripts/common.sh@365 -- # decimal 1 00:06:51.453 23:50:57 thread -- scripts/common.sh@353 -- # local d=1 00:06:51.453 23:50:57 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.453 23:50:57 thread -- scripts/common.sh@355 -- # echo 1 00:06:51.453 23:50:57 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.453 23:50:57 thread -- scripts/common.sh@366 -- # decimal 2 00:06:51.453 23:50:57 thread -- scripts/common.sh@353 -- # local d=2 00:06:51.453 23:50:57 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.453 23:50:57 thread -- scripts/common.sh@355 -- # echo 2 00:06:51.453 23:50:57 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.453 23:50:57 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.453 23:50:57 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.453 23:50:57 thread -- scripts/common.sh@368 -- # return 0 00:06:51.453 23:50:57 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.453 23:50:57 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:51.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.453 --rc genhtml_branch_coverage=1 00:06:51.453 --rc genhtml_function_coverage=1 00:06:51.453 --rc genhtml_legend=1 00:06:51.453 --rc geninfo_all_blocks=1 00:06:51.453 --rc geninfo_unexecuted_blocks=1 00:06:51.453 00:06:51.453 ' 00:06:51.453 23:50:57 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:51.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.453 --rc genhtml_branch_coverage=1 00:06:51.453 --rc genhtml_function_coverage=1 00:06:51.453 --rc genhtml_legend=1 00:06:51.453 --rc geninfo_all_blocks=1 00:06:51.453 --rc geninfo_unexecuted_blocks=1 00:06:51.453 00:06:51.453 ' 00:06:51.453 23:50:57 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:51.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:51.453 --rc genhtml_branch_coverage=1 00:06:51.453 --rc genhtml_function_coverage=1 00:06:51.453 --rc genhtml_legend=1 00:06:51.453 --rc geninfo_all_blocks=1 00:06:51.453 --rc geninfo_unexecuted_blocks=1 00:06:51.453 00:06:51.453 ' 00:06:51.453 23:50:57 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:51.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.453 --rc genhtml_branch_coverage=1 00:06:51.453 --rc genhtml_function_coverage=1 00:06:51.453 --rc genhtml_legend=1 00:06:51.453 --rc geninfo_all_blocks=1 00:06:51.453 --rc geninfo_unexecuted_blocks=1 00:06:51.453 00:06:51.453 ' 00:06:51.453 23:50:57 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.453 23:50:57 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:51.453 23:50:57 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.453 23:50:57 thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.453 ************************************ 00:06:51.454 START TEST thread_poller_perf 00:06:51.454 ************************************ 00:06:51.454 23:50:57 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.454 [2024-11-18 23:50:57.958474] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:51.454 [2024-11-18 23:50:57.959338] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60401 ] 00:06:51.714 [2024-11-18 23:50:58.147335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.714 [2024-11-18 23:50:58.270383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.714 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:53.097 [2024-11-18T23:50:59.789Z] ====================================== 00:06:53.097 [2024-11-18T23:50:59.789Z] busy:2211646602 (cyc) 00:06:53.097 [2024-11-18T23:50:59.789Z] total_run_count: 344000 00:06:53.097 [2024-11-18T23:50:59.789Z] tsc_hz: 2200000000 (cyc) 00:06:53.097 [2024-11-18T23:50:59.789Z] ====================================== 00:06:53.097 [2024-11-18T23:50:59.789Z] poller_cost: 6429 (cyc), 2922 (nsec) 00:06:53.097 00:06:53.097 ************************************ 00:06:53.097 END TEST thread_poller_perf 00:06:53.097 ************************************ 00:06:53.097 real 0m1.559s 00:06:53.097 user 0m1.365s 00:06:53.097 sys 0m0.084s 00:06:53.097 23:50:59 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.097 23:50:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.097 23:50:59 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:53.097 23:50:59 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:53.097 23:50:59 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.097 23:50:59 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.097 ************************************ 00:06:53.097 START TEST thread_poller_perf 00:06:53.097 ************************************ 00:06:53.097 23:50:59 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:53.097 [2024-11-18 23:50:59.565146] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:53.097 [2024-11-18 23:50:59.565307] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60438 ] 00:06:53.097 [2024-11-18 23:50:59.746433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.356 Running 1000 pollers for 1 seconds with 0 microseconds period. 
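The first run's summary above is internally consistent if poller_cost is simply busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz; that formula is an assumption checked against the printed figures, not taken from the tool's source:

    awk 'BEGIN { busy=2211646602; runs=344000; hz=2200000000
                 cyc = busy/runs                       # cycles per poll
                 printf "%d cyc, %d nsec\n", cyc, cyc/(hz/1e9) }'
    # prints: 6429 cyc, 2922 nsec -- matching the poller_cost line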
00:06:53.356 [2024-11-18 23:50:59.839044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.736 [2024-11-18T23:51:01.428Z] ====================================== 00:06:54.736 [2024-11-18T23:51:01.428Z] busy:2203717248 (cyc) 00:06:54.736 [2024-11-18T23:51:01.428Z] total_run_count: 4356000 00:06:54.736 [2024-11-18T23:51:01.428Z] tsc_hz: 2200000000 (cyc) 00:06:54.736 [2024-11-18T23:51:01.428Z] ====================================== 00:06:54.736 [2024-11-18T23:51:01.428Z] poller_cost: 505 (cyc), 229 (nsec) 00:06:54.736 00:06:54.736 real 0m1.510s 00:06:54.736 user 0m1.317s 00:06:54.736 sys 0m0.085s 00:06:54.736 ************************************ 00:06:54.736 END TEST thread_poller_perf 00:06:54.736 ************************************ 00:06:54.736 23:51:01 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.736 23:51:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:54.736 23:51:01 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:54.736 ************************************ 00:06:54.736 END TEST thread 00:06:54.736 ************************************ 00:06:54.736 00:06:54.736 real 0m3.345s 00:06:54.736 user 0m2.821s 00:06:54.736 sys 0m0.304s 00:06:54.736 23:51:01 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.736 23:51:01 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.736 23:51:01 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:54.736 23:51:01 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:54.736 23:51:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.736 23:51:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.736 23:51:01 -- common/autotest_common.sh@10 -- # set +x 00:06:54.736 ************************************ 00:06:54.736 START TEST app_cmdline 00:06:54.736 ************************************ 00:06:54.736 23:51:01 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:54.736 * Looking for test storage... 
00:06:54.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:54.736 23:51:01 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:54.736 23:51:01 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:54.736 23:51:01 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:54.736 23:51:01 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:54.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.736 23:51:01 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:54.737 23:51:01 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.737 23:51:01 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:54.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.737 --rc genhtml_branch_coverage=1 00:06:54.737 --rc genhtml_function_coverage=1 00:06:54.737 --rc genhtml_legend=1 00:06:54.737 --rc geninfo_all_blocks=1 00:06:54.737 --rc geninfo_unexecuted_blocks=1 00:06:54.737 00:06:54.737 ' 00:06:54.737 23:51:01 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:54.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.737 --rc genhtml_branch_coverage=1 00:06:54.737 --rc genhtml_function_coverage=1 00:06:54.737 --rc genhtml_legend=1 00:06:54.737 --rc geninfo_all_blocks=1 00:06:54.737 --rc geninfo_unexecuted_blocks=1 00:06:54.737 00:06:54.737 ' 00:06:54.737 23:51:01 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:54.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.737 --rc genhtml_branch_coverage=1 00:06:54.737 --rc genhtml_function_coverage=1 00:06:54.737 --rc genhtml_legend=1 00:06:54.737 --rc geninfo_all_blocks=1 00:06:54.737 --rc geninfo_unexecuted_blocks=1 00:06:54.737 00:06:54.737 ' 00:06:54.737 23:51:01 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:54.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.737 --rc genhtml_branch_coverage=1 00:06:54.737 --rc genhtml_function_coverage=1 00:06:54.737 --rc genhtml_legend=1 00:06:54.737 --rc geninfo_all_blocks=1 00:06:54.737 --rc geninfo_unexecuted_blocks=1 00:06:54.737 00:06:54.737 ' 00:06:54.737 23:51:01 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:54.737 23:51:01 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60527 00:06:54.737 23:51:01 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60527 00:06:54.737 23:51:01 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:54.737 23:51:01 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60527 ']' 00:06:54.737 23:51:01 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.737 23:51:01 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.737 23:51:01 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.737 23:51:01 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.737 23:51:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:54.996 [2024-11-18 23:51:01.427280] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
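This target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served; anything else should come back as 'Method not found' (-32601), which is exactly what the env_dpdk_get_mem_stats probe below demonstrates. By hand, sketched against the default socket from this run:

    # on the allow-list: returns the version JSON shown below
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
    # not on the allow-list: expected to fail with -32601
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats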
00:06:54.996 [2024-11-18 23:51:01.427982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60527 ] 00:06:54.996 [2024-11-18 23:51:01.609725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.255 [2024-11-18 23:51:01.706939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.255 [2024-11-18 23:51:01.904861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.834 23:51:02 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.834 23:51:02 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:55.834 23:51:02 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:56.097 { 00:06:56.097 "version": "SPDK v25.01-pre git sha1 d47eb51c9", 00:06:56.097 "fields": { 00:06:56.097 "major": 25, 00:06:56.097 "minor": 1, 00:06:56.097 "patch": 0, 00:06:56.097 "suffix": "-pre", 00:06:56.097 "commit": "d47eb51c9" 00:06:56.097 } 00:06:56.097 } 00:06:56.097 23:51:02 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:56.097 23:51:02 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:56.097 23:51:02 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:56.097 23:51:02 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:56.097 23:51:02 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:56.097 23:51:02 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:56.097 23:51:02 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:56.097 23:51:02 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.097 23:51:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:56.097 23:51:02 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.097 23:51:02 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:56.097 23:51:02 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:56.097 23:51:02 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.097 23:51:02 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:56.097 23:51:02 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.097 23:51:02 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.097 23:51:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.097 23:51:02 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.097 23:51:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.097 23:51:02 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.097 23:51:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.097 23:51:02 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.097 23:51:02 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:56.097 23:51:02 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.356 request: 00:06:56.356 { 00:06:56.356 "method": "env_dpdk_get_mem_stats", 00:06:56.356 "req_id": 1 00:06:56.356 } 00:06:56.356 Got JSON-RPC error response 00:06:56.356 response: 00:06:56.356 { 00:06:56.356 "code": -32601, 00:06:56.356 "message": "Method not found" 00:06:56.356 } 00:06:56.356 23:51:03 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:56.356 23:51:03 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:56.356 23:51:03 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:56.356 23:51:03 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:56.356 23:51:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60527 00:06:56.356 23:51:03 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60527 ']' 00:06:56.356 23:51:03 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60527 00:06:56.356 23:51:03 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:56.356 23:51:03 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.356 23:51:03 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60527 00:06:56.616 killing process with pid 60527 00:06:56.616 23:51:03 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.616 23:51:03 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.616 23:51:03 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60527' 00:06:56.616 23:51:03 app_cmdline -- common/autotest_common.sh@973 -- # kill 60527 00:06:56.616 23:51:03 app_cmdline -- common/autotest_common.sh@978 -- # wait 60527 00:06:58.520 ************************************ 00:06:58.520 END TEST app_cmdline 00:06:58.520 ************************************ 00:06:58.520 00:06:58.520 real 0m3.677s 00:06:58.520 user 0m4.246s 00:06:58.520 sys 0m0.533s 00:06:58.520 23:51:04 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.520 23:51:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:58.520 23:51:04 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:58.520 23:51:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.520 23:51:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.520 23:51:04 -- common/autotest_common.sh@10 -- # set +x 00:06:58.520 ************************************ 00:06:58.520 START TEST version 00:06:58.520 ************************************ 00:06:58.520 23:51:04 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:58.520 * Looking for test storage... 
00:06:58.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:58.520 23:51:04 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:58.520 23:51:04 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:58.520 23:51:04 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.520 23:51:05 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.520 23:51:05 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.520 23:51:05 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.520 23:51:05 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.520 23:51:05 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.520 23:51:05 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.520 23:51:05 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.520 23:51:05 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.520 23:51:05 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.520 23:51:05 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.520 23:51:05 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.520 23:51:05 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.520 23:51:05 version -- scripts/common.sh@344 -- # case "$op" in 00:06:58.520 23:51:05 version -- scripts/common.sh@345 -- # : 1 00:06:58.520 23:51:05 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.520 23:51:05 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.520 23:51:05 version -- scripts/common.sh@365 -- # decimal 1 00:06:58.520 23:51:05 version -- scripts/common.sh@353 -- # local d=1 00:06:58.520 23:51:05 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.520 23:51:05 version -- scripts/common.sh@355 -- # echo 1 00:06:58.520 23:51:05 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.520 23:51:05 version -- scripts/common.sh@366 -- # decimal 2 00:06:58.520 23:51:05 version -- scripts/common.sh@353 -- # local d=2 00:06:58.520 23:51:05 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.520 23:51:05 version -- scripts/common.sh@355 -- # echo 2 00:06:58.521 23:51:05 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.521 23:51:05 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.521 23:51:05 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.521 23:51:05 version -- scripts/common.sh@368 -- # return 0 00:06:58.521 23:51:05 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.521 23:51:05 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.521 --rc genhtml_branch_coverage=1 00:06:58.521 --rc genhtml_function_coverage=1 00:06:58.521 --rc genhtml_legend=1 00:06:58.521 --rc geninfo_all_blocks=1 00:06:58.521 --rc geninfo_unexecuted_blocks=1 00:06:58.521 00:06:58.521 ' 00:06:58.521 23:51:05 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.521 --rc genhtml_branch_coverage=1 00:06:58.521 --rc genhtml_function_coverage=1 00:06:58.521 --rc genhtml_legend=1 00:06:58.521 --rc geninfo_all_blocks=1 00:06:58.521 --rc geninfo_unexecuted_blocks=1 00:06:58.521 00:06:58.521 ' 00:06:58.521 23:51:05 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:58.521 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:58.521 --rc genhtml_branch_coverage=1 00:06:58.521 --rc genhtml_function_coverage=1 00:06:58.521 --rc genhtml_legend=1 00:06:58.521 --rc geninfo_all_blocks=1 00:06:58.521 --rc geninfo_unexecuted_blocks=1 00:06:58.521 00:06:58.521 ' 00:06:58.521 23:51:05 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.521 --rc genhtml_branch_coverage=1 00:06:58.521 --rc genhtml_function_coverage=1 00:06:58.521 --rc genhtml_legend=1 00:06:58.521 --rc geninfo_all_blocks=1 00:06:58.521 --rc geninfo_unexecuted_blocks=1 00:06:58.521 00:06:58.521 ' 00:06:58.521 23:51:05 version -- app/version.sh@17 -- # get_header_version major 00:06:58.521 23:51:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:58.521 23:51:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:58.521 23:51:05 version -- app/version.sh@14 -- # cut -f2 00:06:58.521 23:51:05 version -- app/version.sh@17 -- # major=25 00:06:58.521 23:51:05 version -- app/version.sh@18 -- # get_header_version minor 00:06:58.521 23:51:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:58.521 23:51:05 version -- app/version.sh@14 -- # cut -f2 00:06:58.521 23:51:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:58.521 23:51:05 version -- app/version.sh@18 -- # minor=1 00:06:58.521 23:51:05 version -- app/version.sh@19 -- # get_header_version patch 00:06:58.521 23:51:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:58.521 23:51:05 version -- app/version.sh@14 -- # cut -f2 00:06:58.521 23:51:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:58.521 23:51:05 version -- app/version.sh@19 -- # patch=0 00:06:58.521 23:51:05 version -- app/version.sh@20 -- # get_header_version suffix 00:06:58.521 23:51:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:58.521 23:51:05 version -- app/version.sh@14 -- # cut -f2 00:06:58.521 23:51:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:58.521 23:51:05 version -- app/version.sh@20 -- # suffix=-pre 00:06:58.521 23:51:05 version -- app/version.sh@22 -- # version=25.1 00:06:58.521 23:51:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:58.521 23:51:05 version -- app/version.sh@28 -- # version=25.1rc0 00:06:58.521 23:51:05 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:58.521 23:51:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:58.521 23:51:05 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:58.521 23:51:05 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:58.521 00:06:58.521 real 0m0.248s 00:06:58.521 user 0m0.164s 00:06:58.521 sys 0m0.118s 00:06:58.521 ************************************ 00:06:58.521 END TEST version 00:06:58.521 ************************************ 00:06:58.521 23:51:05 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.521 23:51:05 version -- common/autotest_common.sh@10 -- # set +x 00:06:58.521 23:51:05 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:58.521 23:51:05 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:58.521 23:51:05 -- spdk/autotest.sh@194 -- # uname -s 00:06:58.521 23:51:05 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:58.521 23:51:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:58.521 23:51:05 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:06:58.521 23:51:05 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:06:58.521 23:51:05 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:58.521 23:51:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.521 23:51:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.521 23:51:05 -- common/autotest_common.sh@10 -- # set +x 00:06:58.521 ************************************ 00:06:58.521 START TEST spdk_dd 00:06:58.521 ************************************ 00:06:58.521 23:51:05 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:58.780 * Looking for test storage... 00:06:58.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:58.780 23:51:05 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:58.780 23:51:05 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:06:58.780 23:51:05 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.780 23:51:05 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@345 -- # : 1 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.780 23:51:05 spdk_dd -- scripts/common.sh@368 -- # return 0 00:06:58.780 23:51:05 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.780 23:51:05 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.780 --rc genhtml_branch_coverage=1 00:06:58.780 --rc genhtml_function_coverage=1 00:06:58.780 --rc genhtml_legend=1 00:06:58.780 --rc geninfo_all_blocks=1 00:06:58.780 --rc geninfo_unexecuted_blocks=1 00:06:58.780 00:06:58.780 ' 00:06:58.780 23:51:05 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.780 --rc genhtml_branch_coverage=1 00:06:58.781 --rc genhtml_function_coverage=1 00:06:58.781 --rc genhtml_legend=1 00:06:58.781 --rc geninfo_all_blocks=1 00:06:58.781 --rc geninfo_unexecuted_blocks=1 00:06:58.781 00:06:58.781 ' 00:06:58.781 23:51:05 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:58.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.781 --rc genhtml_branch_coverage=1 00:06:58.781 --rc genhtml_function_coverage=1 00:06:58.781 --rc genhtml_legend=1 00:06:58.781 --rc geninfo_all_blocks=1 00:06:58.781 --rc geninfo_unexecuted_blocks=1 00:06:58.781 00:06:58.781 ' 00:06:58.781 23:51:05 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.781 --rc genhtml_branch_coverage=1 00:06:58.781 --rc genhtml_function_coverage=1 00:06:58.781 --rc genhtml_legend=1 00:06:58.781 --rc geninfo_all_blocks=1 00:06:58.781 --rc geninfo_unexecuted_blocks=1 00:06:58.781 00:06:58.781 ' 00:06:58.781 23:51:05 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:58.781 23:51:05 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:06:58.781 23:51:05 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.781 23:51:05 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.781 23:51:05 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.781 23:51:05 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.781 23:51:05 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.781 23:51:05 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.781 23:51:05 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:58.781 23:51:05 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.781 23:51:05 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:59.040 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:59.040 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:59.040 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:59.040 23:51:05 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:59.040 23:51:05 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@233 -- # local class 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@235 -- # local progif 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@236 -- # class=01 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:06:59.040 23:51:05 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:06:59.040 23:51:05 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:06:59.300 23:51:05 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:59.300 23:51:05 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:59.300 23:51:05 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:59.300 23:51:05 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:59.300 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.300 23:51:05 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.300 23:51:05 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:59.300 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:06:59.300 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.300 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:59.300 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.300 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:59.300 
23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 
23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_fuse_dispatcher.so.1.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:59.301 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@142 -- # read 
-r _ lib _ 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:59.302 * spdk_dd linked to liburing 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:59.302 23:51:05 spdk_dd 
-- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:06:59.302 23:51:05 spdk_dd -- 
common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:59.302 23:51:05 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:59.302 23:51:05 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:59.302 23:51:05 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:59.302 23:51:05 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:59.302 23:51:05 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:59.302 23:51:05 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.302 23:51:05 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:59.302 ************************************ 00:06:59.302 START TEST spdk_dd_basic_rw 00:06:59.302 ************************************ 00:06:59.302 23:51:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:59.302 * Looking for test storage... 
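[Editor's annotation] The check_liburing pass traced above does not trust build flags alone: dd/common.sh runs objdump -p on the spdk_dd binary, filters the NEEDED (DT_NEEDED) entries, and compares each library name against the liburing.so.* glob. Here liburing.so.2 matches, the '* spdk_dd linked to liburing' notice is printed, and build_config.sh confirms CONFIG_URING=y, so liburing_in_use is set to 1 before spdk_dd_basic_rw runs. A condensed sketch of that probe, with the binary path as built in this run:

    # Flag liburing if it appears among the DT_NEEDED entries of spdk_dd.
    liburing_in_use=0
    while read -r _ lib _; do
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
    printf 'liburing_in_use=%s\n' "$liburing_in_use"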
00:06:59.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:59.302 23:51:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:59.302 23:51:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:06:59.303 23:51:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.562 23:51:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:06:59.562 23:51:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.562 23:51:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:06:59.562 23:51:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:06:59.562 23:51:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.562 23:51:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:06:59.562 23:51:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.562 23:51:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.562 23:51:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.562 23:51:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:06:59.562 23:51:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.562 23:51:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:59.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.562 --rc genhtml_branch_coverage=1 00:06:59.562 --rc genhtml_function_coverage=1 00:06:59.562 --rc genhtml_legend=1 00:06:59.562 --rc geninfo_all_blocks=1 00:06:59.562 --rc geninfo_unexecuted_blocks=1 00:06:59.562 00:06:59.562 ' 00:06:59.562 23:51:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:59.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.562 --rc genhtml_branch_coverage=1 00:06:59.563 --rc genhtml_function_coverage=1 00:06:59.563 --rc genhtml_legend=1 00:06:59.563 --rc geninfo_all_blocks=1 00:06:59.563 --rc geninfo_unexecuted_blocks=1 00:06:59.563 00:06:59.563 ' 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:59.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.563 --rc genhtml_branch_coverage=1 00:06:59.563 --rc genhtml_function_coverage=1 00:06:59.563 --rc genhtml_legend=1 00:06:59.563 --rc geninfo_all_blocks=1 00:06:59.563 --rc geninfo_unexecuted_blocks=1 00:06:59.563 00:06:59.563 ' 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:59.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.563 --rc genhtml_branch_coverage=1 00:06:59.563 --rc genhtml_function_coverage=1 00:06:59.563 --rc genhtml_legend=1 00:06:59.563 --rc geninfo_all_blocks=1 00:06:59.563 --rc geninfo_unexecuted_blocks=1 00:06:59.563 00:06:59.563 ' 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.563 23:51:06 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:59.563 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:59.824 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:59.824 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:59.825 ************************************ 00:06:59.825 START TEST dd_bs_lt_native_bs 00:06:59.825 ************************************ 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:59.825 23:51:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:59.825 { 00:06:59.825 "subsystems": [ 00:06:59.825 { 00:06:59.825 "subsystem": "bdev", 00:06:59.825 "config": [ 00:06:59.826 { 00:06:59.826 "params": { 00:06:59.826 "trtype": "pcie", 00:06:59.826 "traddr": "0000:00:10.0", 00:06:59.826 "name": "Nvme0" 00:06:59.826 }, 00:06:59.826 "method": "bdev_nvme_attach_controller" 00:06:59.826 }, 00:06:59.826 { 00:06:59.826 "method": "bdev_wait_for_examine" 00:06:59.826 } 00:06:59.826 ] 00:06:59.826 } 00:06:59.826 ] 00:06:59.826 } 00:06:59.826 [2024-11-18 23:51:06.482565] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:06:59.826 [2024-11-18 23:51:06.483534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60897 ] 00:07:00.085 [2024-11-18 23:51:06.667364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.343 [2024-11-18 23:51:06.792038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.343 [2024-11-18 23:51:06.974182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.602 [2024-11-18 23:51:07.133182] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:00.602 [2024-11-18 23:51:07.133429] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.170 [2024-11-18 23:51:07.567591] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:01.170 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:01.171 00:07:01.171 real 0m1.413s 00:07:01.171 user 0m1.161s 00:07:01.171 sys 0m0.203s 00:07:01.171 ************************************ 00:07:01.171 END TEST dd_bs_lt_native_bs 00:07:01.171 ************************************ 00:07:01.171 
23:51:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:01.171 ************************************ 00:07:01.171 START TEST dd_rw 00:07:01.171 ************************************ 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:01.171 23:51:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:02.107 23:51:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:02.107 23:51:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:02.107 23:51:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:02.107 23:51:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:02.107 { 00:07:02.107 "subsystems": [ 00:07:02.107 { 00:07:02.107 "subsystem": "bdev", 00:07:02.107 "config": [ 00:07:02.107 { 00:07:02.107 "params": { 00:07:02.107 "trtype": "pcie", 00:07:02.108 "traddr": "0000:00:10.0", 00:07:02.108 "name": "Nvme0" 00:07:02.108 }, 00:07:02.108 "method": "bdev_nvme_attach_controller" 00:07:02.108 }, 00:07:02.108 { 00:07:02.108 "method": "bdev_wait_for_examine" 00:07:02.108 } 00:07:02.108 ] 00:07:02.108 } 00:07:02.108 
] 00:07:02.108 } 00:07:02.108 [2024-11-18 23:51:08.540341] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:02.108 [2024-11-18 23:51:08.540738] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60940 ] 00:07:02.108 [2024-11-18 23:51:08.718557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.367 [2024-11-18 23:51:08.801177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.367 [2024-11-18 23:51:08.949235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.626  [2024-11-18T23:51:10.253Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:03.561 00:07:03.561 23:51:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:03.561 23:51:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:03.561 23:51:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:03.561 23:51:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:03.561 { 00:07:03.561 "subsystems": [ 00:07:03.561 { 00:07:03.561 "subsystem": "bdev", 00:07:03.561 "config": [ 00:07:03.561 { 00:07:03.561 "params": { 00:07:03.561 "trtype": "pcie", 00:07:03.561 "traddr": "0000:00:10.0", 00:07:03.561 "name": "Nvme0" 00:07:03.561 }, 00:07:03.561 "method": "bdev_nvme_attach_controller" 00:07:03.561 }, 00:07:03.562 { 00:07:03.562 "method": "bdev_wait_for_examine" 00:07:03.562 } 00:07:03.562 ] 00:07:03.562 } 00:07:03.562 ] 00:07:03.562 } 00:07:03.562 [2024-11-18 23:51:10.038900] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
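[editor's note] Before the read/write passes above begin, get_native_nvme_bs (dd/common.sh, top of this section) derives the 4096-byte native block size by running two regex matches over the spdk_nvme_identify dump. A minimal bash sketch of that extraction, reusing the two patterns the xtrace shows; a scalar stands in for the script's mapfile array:

  # Sketch: recover the native block size from an identify dump, as logged above.
  # Binary path and PCI address are taken from this log.
  id="$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0')"
  pat='Current LBA Format: *LBA Format #([0-9]+)'
  [[ $id =~ $pat ]] && lbaf=${BASH_REMATCH[1]}        # "04" on this QEMU controller
  pat="LBA Format #${lbaf}: Data Size: *([0-9]+)"
  [[ $id =~ $pat ]] && native_bs=${BASH_REMATCH[1]}   # 4096
  echo "$native_bs"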
00:07:03.562 [2024-11-18 23:51:10.039581] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60960 ] 00:07:03.562 [2024-11-18 23:51:10.218427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.821 [2024-11-18 23:51:10.315555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.821 [2024-11-18 23:51:10.475743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.080  [2024-11-18T23:51:11.341Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:04.649 00:07:04.649 23:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.649 23:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:04.649 23:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:04.649 23:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:04.649 23:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:04.649 23:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:04.649 23:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:04.649 23:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:04.649 23:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:04.649 23:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:04.649 23:51:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:04.908 { 00:07:04.908 "subsystems": [ 00:07:04.908 { 00:07:04.908 "subsystem": "bdev", 00:07:04.908 "config": [ 00:07:04.908 { 00:07:04.908 "params": { 00:07:04.908 "trtype": "pcie", 00:07:04.908 "traddr": "0000:00:10.0", 00:07:04.908 "name": "Nvme0" 00:07:04.908 }, 00:07:04.908 "method": "bdev_nvme_attach_controller" 00:07:04.908 }, 00:07:04.908 { 00:07:04.908 "method": "bdev_wait_for_examine" 00:07:04.908 } 00:07:04.908 ] 00:07:04.908 } 00:07:04.908 ] 00:07:04.908 } 00:07:04.908 [2024-11-18 23:51:11.433440] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
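[editor's note] Each (bs, qd) case above is the same three-step round trip: write a generated dump file to the bdev, read the same number of blocks back into a second file, then diff the two. A sketch of one pass, with $SPDK_DD and $conf standing in for the binary path and the gen_conf JSON file seen in the log:

  # One dd_rw pass: write, read back, verify byte-for-byte.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json "$conf"
  "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json "$conf"
  diff -q dd.dump0 dd.dump1    # any byte mismatch fails the test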
00:07:04.908 [2024-11-18 23:51:11.433628] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60992 ] 00:07:05.168 [2024-11-18 23:51:11.611201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.168 [2024-11-18 23:51:11.696953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.168 [2024-11-18 23:51:11.844974] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.427  [2024-11-18T23:51:13.056Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:06.364 00:07:06.364 23:51:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:06.364 23:51:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:06.364 23:51:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:06.364 23:51:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:06.364 23:51:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:06.364 23:51:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:06.364 23:51:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.933 23:51:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:06.933 23:51:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:06.933 23:51:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:06.933 23:51:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:06.933 { 00:07:06.933 "subsystems": [ 00:07:06.933 { 00:07:06.933 "subsystem": "bdev", 00:07:06.933 "config": [ 00:07:06.933 { 00:07:06.933 "params": { 00:07:06.933 "trtype": "pcie", 00:07:06.933 "traddr": "0000:00:10.0", 00:07:06.933 "name": "Nvme0" 00:07:06.933 }, 00:07:06.933 "method": "bdev_nvme_attach_controller" 00:07:06.933 }, 00:07:06.933 { 00:07:06.933 "method": "bdev_wait_for_examine" 00:07:06.933 } 00:07:06.933 ] 00:07:06.933 } 00:07:06.933 ] 00:07:06.933 } 00:07:06.933 [2024-11-18 23:51:13.480586] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
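[editor's note] Between cases the harness runs clear_nvme, visible above as a single 1 MiB zero-fill (bs=1048576, count=1) so the next pass starts from known device contents. Sketched under the same assumptions as the previous note:

  # clear_nvme: overwrite the head of the namespace with zeroes.
  "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$conf"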
00:07:06.933 [2024-11-18 23:51:13.480884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61019 ] 00:07:07.193 [2024-11-18 23:51:13.643680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.193 [2024-11-18 23:51:13.725479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.193 [2024-11-18 23:51:13.870455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.452  [2024-11-18T23:51:14.712Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:08.020 00:07:08.280 23:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:08.280 23:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:08.280 23:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:08.280 23:51:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:08.280 { 00:07:08.280 "subsystems": [ 00:07:08.280 { 00:07:08.280 "subsystem": "bdev", 00:07:08.280 "config": [ 00:07:08.280 { 00:07:08.280 "params": { 00:07:08.280 "trtype": "pcie", 00:07:08.280 "traddr": "0000:00:10.0", 00:07:08.280 "name": "Nvme0" 00:07:08.280 }, 00:07:08.280 "method": "bdev_nvme_attach_controller" 00:07:08.280 }, 00:07:08.280 { 00:07:08.280 "method": "bdev_wait_for_examine" 00:07:08.280 } 00:07:08.280 ] 00:07:08.280 } 00:07:08.280 ] 00:07:08.280 } 00:07:08.280 [2024-11-18 23:51:14.815763] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:08.280 [2024-11-18 23:51:14.815921] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61044 ] 00:07:08.539 [2024-11-18 23:51:14.991324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.539 [2024-11-18 23:51:15.075116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.539 [2024-11-18 23:51:15.220668] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.799  [2024-11-18T23:51:16.430Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:09.738 00:07:09.738 23:51:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.738 23:51:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:09.738 23:51:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:09.738 23:51:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:09.738 23:51:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:09.738 23:51:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:09.738 23:51:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:09.738 23:51:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:09.738 23:51:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:09.738 23:51:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:09.738 23:51:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:09.738 { 00:07:09.738 "subsystems": [ 00:07:09.738 { 00:07:09.738 "subsystem": "bdev", 00:07:09.738 "config": [ 00:07:09.738 { 00:07:09.738 "params": { 00:07:09.738 "trtype": "pcie", 00:07:09.738 "traddr": "0000:00:10.0", 00:07:09.738 "name": "Nvme0" 00:07:09.738 }, 00:07:09.738 "method": "bdev_nvme_attach_controller" 00:07:09.738 }, 00:07:09.738 { 00:07:09.738 "method": "bdev_wait_for_examine" 00:07:09.738 } 00:07:09.738 ] 00:07:09.738 } 00:07:09.738 ] 00:07:09.738 } 00:07:09.738 [2024-11-18 23:51:16.339764] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
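[editor's note] With the 4096-byte rounds done, the log moves to larger blocks. The matrix comes from the setup xtraced at the start of dd_rw: block sizes are the native block size shifted left by 0..2 bits, each driven at queue depths 1 and 64. A sketch of that loop; run_one_pass is a hypothetical stand-in for one write/read/diff round trip:

  # The bs/qd matrix exercised in this section.
  native_bs=4096
  qds=(1 64)
  bss=()
  for shift in 0 1 2; do
    bss+=($(( native_bs << shift )))   # 4096 8192 16384
  done
  for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
      run_one_pass "$bs" "$qd"         # one round trip per pair, as logged above
    done
  done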
00:07:09.738 [2024-11-18 23:51:16.340189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61066 ] 00:07:09.998 [2024-11-18 23:51:16.518894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.998 [2024-11-18 23:51:16.601816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.258 [2024-11-18 23:51:16.756002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.258  [2024-11-18T23:51:17.889Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:11.197 00:07:11.197 23:51:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:11.197 23:51:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:11.197 23:51:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:11.197 23:51:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:11.197 23:51:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:11.197 23:51:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:11.197 23:51:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:11.197 23:51:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.456 23:51:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:11.456 23:51:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:11.456 23:51:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:11.456 23:51:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.716 { 00:07:11.716 "subsystems": [ 00:07:11.716 { 00:07:11.716 "subsystem": "bdev", 00:07:11.716 "config": [ 00:07:11.716 { 00:07:11.716 "params": { 00:07:11.716 "trtype": "pcie", 00:07:11.716 "traddr": "0000:00:10.0", 00:07:11.716 "name": "Nvme0" 00:07:11.716 }, 00:07:11.716 "method": "bdev_nvme_attach_controller" 00:07:11.716 }, 00:07:11.716 { 00:07:11.716 "method": "bdev_wait_for_examine" 00:07:11.716 } 00:07:11.716 ] 00:07:11.716 } 00:07:11.716 ] 00:07:11.716 } 00:07:11.716 [2024-11-18 23:51:18.177646] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:11.716 [2024-11-18 23:51:18.177923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61097 ] 00:07:11.716 [2024-11-18 23:51:18.339358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.976 [2024-11-18 23:51:18.439835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.976 [2024-11-18 23:51:18.597347] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.236  [2024-11-18T23:51:19.864Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:13.172 00:07:13.172 23:51:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:13.172 23:51:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:13.172 23:51:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:13.172 23:51:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.172 { 00:07:13.172 "subsystems": [ 00:07:13.172 { 00:07:13.172 "subsystem": "bdev", 00:07:13.172 "config": [ 00:07:13.172 { 00:07:13.172 "params": { 00:07:13.172 "trtype": "pcie", 00:07:13.172 "traddr": "0000:00:10.0", 00:07:13.172 "name": "Nvme0" 00:07:13.172 }, 00:07:13.172 "method": "bdev_nvme_attach_controller" 00:07:13.172 }, 00:07:13.172 { 00:07:13.172 "method": "bdev_wait_for_examine" 00:07:13.172 } 00:07:13.172 ] 00:07:13.172 } 00:07:13.172 ] 00:07:13.172 } 00:07:13.172 [2024-11-18 23:51:19.698617] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:13.172 [2024-11-18 23:51:19.698807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61123 ] 00:07:13.431 [2024-11-18 23:51:19.876797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.431 [2024-11-18 23:51:19.959506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.431 [2024-11-18 23:51:20.110090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.689  [2024-11-18T23:51:20.949Z] Copying: 56/56 [kB] (average 18 MBps) 00:07:14.257 00:07:14.257 23:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:14.257 23:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:14.257 23:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:14.257 23:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:14.257 23:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:14.257 23:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:14.257 23:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:14.257 23:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:14.257 23:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:14.257 23:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:14.257 23:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:14.516 { 00:07:14.516 "subsystems": [ 00:07:14.516 { 00:07:14.516 "subsystem": "bdev", 00:07:14.516 "config": [ 00:07:14.516 { 00:07:14.516 "params": { 00:07:14.516 "trtype": "pcie", 00:07:14.516 "traddr": "0000:00:10.0", 00:07:14.516 "name": "Nvme0" 00:07:14.516 }, 00:07:14.516 "method": "bdev_nvme_attach_controller" 00:07:14.516 }, 00:07:14.516 { 00:07:14.516 "method": "bdev_wait_for_examine" 00:07:14.516 } 00:07:14.516 ] 00:07:14.516 } 00:07:14.516 ] 00:07:14.516 } 00:07:14.516 [2024-11-18 23:51:21.051850] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:14.516 [2024-11-18 23:51:21.052231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61153 ] 00:07:14.775 [2024-11-18 23:51:21.228165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.776 [2024-11-18 23:51:21.310424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.776 [2024-11-18 23:51:21.455346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.035  [2024-11-18T23:51:22.664Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:15.972 00:07:15.972 23:51:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:15.972 23:51:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:15.972 23:51:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:15.972 23:51:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:15.972 23:51:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:15.972 23:51:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:15.972 23:51:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:16.538 23:51:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:16.538 23:51:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:16.538 23:51:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:16.538 23:51:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:16.538 { 00:07:16.538 "subsystems": [ 00:07:16.538 { 00:07:16.538 "subsystem": "bdev", 00:07:16.538 "config": [ 00:07:16.538 { 00:07:16.538 "params": { 00:07:16.538 "trtype": "pcie", 00:07:16.538 "traddr": "0000:00:10.0", 00:07:16.538 "name": "Nvme0" 00:07:16.538 }, 00:07:16.538 "method": "bdev_nvme_attach_controller" 00:07:16.538 }, 00:07:16.538 { 00:07:16.538 "method": "bdev_wait_for_examine" 00:07:16.538 } 00:07:16.538 ] 00:07:16.538 } 00:07:16.538 ] 00:07:16.538 } 00:07:16.538 [2024-11-18 23:51:23.060627] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:16.538 [2024-11-18 23:51:23.060797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61183 ] 00:07:16.538 [2024-11-18 23:51:23.223246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.796 [2024-11-18 23:51:23.305365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.796 [2024-11-18 23:51:23.456048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.055  [2024-11-18T23:51:24.315Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:17.623 00:07:17.623 23:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:17.623 23:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:17.623 23:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:17.623 23:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:17.882 { 00:07:17.882 "subsystems": [ 00:07:17.882 { 00:07:17.882 "subsystem": "bdev", 00:07:17.882 "config": [ 00:07:17.882 { 00:07:17.882 "params": { 00:07:17.882 "trtype": "pcie", 00:07:17.882 "traddr": "0000:00:10.0", 00:07:17.882 "name": "Nvme0" 00:07:17.882 }, 00:07:17.882 "method": "bdev_nvme_attach_controller" 00:07:17.882 }, 00:07:17.882 { 00:07:17.882 "method": "bdev_wait_for_examine" 00:07:17.882 } 00:07:17.882 ] 00:07:17.882 } 00:07:17.882 ] 00:07:17.882 } 00:07:17.882 [2024-11-18 23:51:24.406036] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:17.882 [2024-11-18 23:51:24.406200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61204 ] 00:07:18.141 [2024-11-18 23:51:24.586641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.141 [2024-11-18 23:51:24.676671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.403 [2024-11-18 23:51:24.832142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.403  [2024-11-18T23:51:26.087Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:19.395 00:07:19.395 23:51:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:19.395 23:51:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:19.395 23:51:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:19.395 23:51:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:19.395 23:51:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:19.395 23:51:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:19.395 23:51:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:19.395 23:51:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:19.395 23:51:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:19.395 23:51:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:19.395 23:51:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:19.395 { 00:07:19.395 "subsystems": [ 00:07:19.395 { 00:07:19.395 "subsystem": "bdev", 00:07:19.395 "config": [ 00:07:19.395 { 00:07:19.395 "params": { 00:07:19.395 "trtype": "pcie", 00:07:19.395 "traddr": "0000:00:10.0", 00:07:19.395 "name": "Nvme0" 00:07:19.395 }, 00:07:19.395 "method": "bdev_nvme_attach_controller" 00:07:19.395 }, 00:07:19.395 { 00:07:19.395 "method": "bdev_wait_for_examine" 00:07:19.395 } 00:07:19.395 ] 00:07:19.395 } 00:07:19.395 ] 00:07:19.395 } 00:07:19.395 [2024-11-18 23:51:25.982074] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:19.395 [2024-11-18 23:51:25.982244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61232 ] 00:07:19.654 [2024-11-18 23:51:26.161324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.654 [2024-11-18 23:51:26.252580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.913 [2024-11-18 23:51:26.410936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.913  [2024-11-18T23:51:27.543Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:20.851 00:07:20.851 23:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:20.851 23:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:20.851 23:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:20.851 23:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:20.851 23:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:20.851 23:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:20.851 23:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:20.851 23:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:21.109 23:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:21.109 23:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:21.109 23:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:21.109 23:51:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:21.109 { 00:07:21.109 "subsystems": [ 00:07:21.109 { 00:07:21.109 "subsystem": "bdev", 00:07:21.109 "config": [ 00:07:21.109 { 00:07:21.109 "params": { 00:07:21.109 "trtype": "pcie", 00:07:21.109 "traddr": "0000:00:10.0", 00:07:21.109 "name": "Nvme0" 00:07:21.109 }, 00:07:21.109 "method": "bdev_nvme_attach_controller" 00:07:21.109 }, 00:07:21.109 { 00:07:21.109 "method": "bdev_wait_for_examine" 00:07:21.109 } 00:07:21.109 ] 00:07:21.109 } 00:07:21.109 ] 00:07:21.109 } 00:07:21.368 [2024-11-18 23:51:27.836891] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:21.368 [2024-11-18 23:51:27.837291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61263 ] 00:07:21.368 [2024-11-18 23:51:28.016375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.627 [2024-11-18 23:51:28.103607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.627 [2024-11-18 23:51:28.262140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.886  [2024-11-18T23:51:29.514Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:22.822 00:07:22.822 23:51:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:22.822 23:51:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:22.822 23:51:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:22.822 23:51:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:22.822 { 00:07:22.822 "subsystems": [ 00:07:22.822 { 00:07:22.822 "subsystem": "bdev", 00:07:22.822 "config": [ 00:07:22.822 { 00:07:22.822 "params": { 00:07:22.822 "trtype": "pcie", 00:07:22.822 "traddr": "0000:00:10.0", 00:07:22.822 "name": "Nvme0" 00:07:22.822 }, 00:07:22.822 "method": "bdev_nvme_attach_controller" 00:07:22.822 }, 00:07:22.822 { 00:07:22.822 "method": "bdev_wait_for_examine" 00:07:22.822 } 00:07:22.822 ] 00:07:22.822 } 00:07:22.822 ] 00:07:22.822 } 00:07:22.822 [2024-11-18 23:51:29.376742] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:22.823 [2024-11-18 23:51:29.376915] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61283 ] 00:07:23.081 [2024-11-18 23:51:29.560408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.081 [2024-11-18 23:51:29.672163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.339 [2024-11-18 23:51:29.869908] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.598  [2024-11-18T23:51:30.858Z] Copying: 48/48 [kB] (average 23 MBps) 00:07:24.166 00:07:24.166 23:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:24.166 23:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:24.166 23:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:24.166 23:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:24.166 23:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:24.166 23:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:24.166 23:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:24.166 23:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:24.166 23:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:24.166 23:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:24.166 23:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:24.424 { 00:07:24.424 "subsystems": [ 00:07:24.424 { 00:07:24.424 "subsystem": "bdev", 00:07:24.424 "config": [ 00:07:24.424 { 00:07:24.424 "params": { 00:07:24.424 "trtype": "pcie", 00:07:24.424 "traddr": "0000:00:10.0", 00:07:24.424 "name": "Nvme0" 00:07:24.424 }, 00:07:24.424 "method": "bdev_nvme_attach_controller" 00:07:24.424 }, 00:07:24.424 { 00:07:24.424 "method": "bdev_wait_for_examine" 00:07:24.424 } 00:07:24.424 ] 00:07:24.424 } 00:07:24.424 ] 00:07:24.424 } 00:07:24.424 [2024-11-18 23:51:30.901637] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:24.424 [2024-11-18 23:51:30.901812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61316 ] 00:07:24.424 [2024-11-18 23:51:31.082219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.683 [2024-11-18 23:51:31.170361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.683 [2024-11-18 23:51:31.321845] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.942  [2024-11-18T23:51:32.584Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:25.892 00:07:25.892 23:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:25.892 23:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:25.892 23:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:25.892 23:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:25.892 23:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:25.892 23:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:25.892 23:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:26.151 23:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:26.151 23:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:26.151 23:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:26.151 23:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:26.410 { 00:07:26.410 "subsystems": [ 00:07:26.410 { 00:07:26.410 "subsystem": "bdev", 00:07:26.410 "config": [ 00:07:26.410 { 00:07:26.410 "params": { 00:07:26.410 "trtype": "pcie", 00:07:26.410 "traddr": "0000:00:10.0", 00:07:26.410 "name": "Nvme0" 00:07:26.410 }, 00:07:26.410 "method": "bdev_nvme_attach_controller" 00:07:26.410 }, 00:07:26.410 { 00:07:26.411 "method": "bdev_wait_for_examine" 00:07:26.411 } 00:07:26.411 ] 00:07:26.411 } 00:07:26.411 ] 00:07:26.411 } 00:07:26.411 [2024-11-18 23:51:32.951736] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:26.411 [2024-11-18 23:51:32.952137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61347 ] 00:07:26.670 [2024-11-18 23:51:33.135520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.670 [2024-11-18 23:51:33.241681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.928 [2024-11-18 23:51:33.401618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.928  [2024-11-18T23:51:34.558Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:27.866 00:07:27.866 23:51:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:27.866 23:51:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:27.866 23:51:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:27.866 23:51:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:27.866 { 00:07:27.866 "subsystems": [ 00:07:27.866 { 00:07:27.866 "subsystem": "bdev", 00:07:27.866 "config": [ 00:07:27.866 { 00:07:27.866 "params": { 00:07:27.866 "trtype": "pcie", 00:07:27.866 "traddr": "0000:00:10.0", 00:07:27.866 "name": "Nvme0" 00:07:27.866 }, 00:07:27.866 "method": "bdev_nvme_attach_controller" 00:07:27.866 }, 00:07:27.866 { 00:07:27.866 "method": "bdev_wait_for_examine" 00:07:27.866 } 00:07:27.866 ] 00:07:27.866 } 00:07:27.866 ] 00:07:27.866 } 00:07:27.866 [2024-11-18 23:51:34.357662] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:27.866 [2024-11-18 23:51:34.357978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61367 ] 00:07:27.866 [2024-11-18 23:51:34.521863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.125 [2024-11-18 23:51:34.614846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.125 [2024-11-18 23:51:34.761576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.384  [2024-11-18T23:51:36.014Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:29.322 00:07:29.322 23:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:29.322 23:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:29.322 23:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:29.322 23:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:29.322 23:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:29.322 23:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:29.322 23:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:29.322 23:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:29.322 23:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:29.322 23:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:29.322 23:51:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:29.322 { 00:07:29.322 "subsystems": [ 00:07:29.322 { 00:07:29.322 "subsystem": "bdev", 00:07:29.322 "config": [ 00:07:29.322 { 00:07:29.322 "params": { 00:07:29.322 "trtype": "pcie", 00:07:29.322 "traddr": "0000:00:10.0", 00:07:29.322 "name": "Nvme0" 00:07:29.322 }, 00:07:29.322 "method": "bdev_nvme_attach_controller" 00:07:29.322 }, 00:07:29.322 { 00:07:29.322 "method": "bdev_wait_for_examine" 00:07:29.322 } 00:07:29.322 ] 00:07:29.322 } 00:07:29.322 ] 00:07:29.322 } 00:07:29.322 [2024-11-18 23:51:35.949288] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
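The clear_nvme step traced above resets the bdev between cases by overwriting its head with zeroes — one 1 MiB block sourced from /dev/zero. A sketch, again reusing the assumed $conf hand-off from the earlier example:

  # wipe: one 1048576-byte block of zeroes over the start of the bdev
  build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 \
                    --json /dev/fd/62 62< <(printf %s "$conf")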
00:07:29.323 [2024-11-18 23:51:35.949467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61398 ] 00:07:29.581 [2024-11-18 23:51:36.135866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.581 [2024-11-18 23:51:36.252973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.840 [2024-11-18 23:51:36.416581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.099  [2024-11-18T23:51:37.358Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:30.666 00:07:30.666 00:07:30.666 real 0m29.437s 00:07:30.666 user 0m24.567s 00:07:30.666 sys 0m13.628s 00:07:30.666 23:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.666 ************************************ 00:07:30.666 END TEST dd_rw 00:07:30.666 ************************************ 00:07:30.666 23:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:30.666 23:51:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:30.666 23:51:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.666 23:51:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.666 23:51:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:30.666 ************************************ 00:07:30.666 START TEST dd_rw_offset 00:07:30.666 ************************************ 00:07:30.666 23:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:07:30.666 23:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:30.666 23:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:30.666 23:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:30.666 23:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:30.926 23:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:30.926 23:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=lvek4m8nby4cab0q2xftfqltdsry8lgp4lqv6xeri2zmv8vw6x99uhuqt9wd03ds7t29eteymqgy7pf6pl3b5ds4grv7m35j6k6w2hnp42u86tcoapwm1tb9xjbe0ybkbly9i3luz18qz3ijt3t3l3h6vhvsvxu9tmlly57mde2qzfpc154e81o09uk3307nhxwn34t2nn0wfpnz4imr89qzbx98b4lh3xohket1bd7s6yjgrb4x65krco30ystgfa8vc7x21oej7ro0lajhbvjxllgw5qtrx8qdm1jxsyp6quea4rpbqqp0hpynx2ekg49zn7tlrw1cbryl3swd2ocs69khmia739c1zuc8npmh3ujn9zg94wc6lz30iejzu0n7boip2m9dyr74g9xnm53v7muu8x7iebllgke0g43ifjywip3neg5gu8xr705dsjpks5b4pgsum20ls7wamz5zz4o3k8do8c1wtynrjzpnka1e5muw0l5wf620jvesam45v2v3ad8wpd76fro0aomgewnd9i7k3gm63khehkg6o2rdkzg9fpf355yyadfv53i5cuo2pbx1jdn7edcrifa1ghybf150pct18zh84wqp4p8950c3dngdqksy4e771g1sgq271jh77vvu5y9ggj8v1zx3228df0wtwxci0bgvumiwwq8gq7spfq3jr0apcsnt3urtvamzma4dq9hymawne6flzbtp8m7dry66ldo3la44gcsxbr6gxx699n98kdnfvxwxo404xbmxouc6u6vav3e7rvpx59pchise34n1vrme55v9rcw747cd4raavob80meivubgqrb5kru18xjrf5tkrav7ximidrrz44943lteizgwvbulkjckwz0jkdq68w9m4qbsrq5b14un892wnnolzkpzuzthlr3prthx57vx51clg5vs7ec1nc55mqit92etboa1k0ylumrw8cjs2bswa1f1m2gsmjli0817ev7bmj6u0yx0po8yczquaywmzvl9igcmh48hzrxfsythbobo2wf7abf9cgk5d6vp2c15en3ybzlblxb55b9acss6ljrnepoioby14pugqbvbtst6ooj749qnu5j0fxtxyc8vnhd8a2y8ud4a5yh9o2yipqe4nud7uo47qravkbq3xu4abcds09aldwp7gcczp39kirpthyk0m8ai9k0v7akq9l6grtgf8793wjvyvvw8hi3z7lupy7cc26jp9eu00e521f8a87fojb5ilasanv588fbqv19bx3lccy848nf9xdecs7cexhekmpjg7wrum8g8dqa74kgabxf26mca56jr9pf9xsb54yhu3934sm9t5pkg328llequbia0qbsvb9t7bacyha4m7gpqjhj1rt5nblha66g1s223qa0m9jb277zmab1ajcpsw6f2v42tv3q9b8386bb7frev22o1ul62clf3vjty2ni78k9tgtscjog5dbd981gr8s59l8z57cc3pl166nlipjsajd5q8gz8uugtf5gz80caqndxghoexvyhomdn91yuu2ewwj18e4020jax1g25exp1t6hitgtsgw824lgqjcdfqx2p2pp13a8p69becztnti6rikfs5fbrgf90zzcgzp7otwopbk7wr6okbr3yqb0g9mrynl0wdc501awrorfs7946amsfj8k9ywm75j3tu4ix7d1pg2rixnljdbjksqa89dyoc41f3qz52iij8d62x6ie386vtl1a7ajydkxniwac2yo7734dsl95kksijyw72ap30v564jn0q2bn3d6qjnhjjqvj7jim1sc6szqkhnwtpb0zao4wjzqhfouecfxz0t44suf3yahetfar7mxvbfwsz4kc0594bu4gja5s950d76qnid5r2nvh24q2p8fmtamuqgdgchaqpm7ux25kyp7phoopwtzqojm9y5oq57kpp1c83138hlv383ujiidx2e45j68ex4gvdv9ms1p3r28m571awifiqs61qxxg85ieo8ryhevqlfb4iwd75zid5kfinmwd5gnumb3bmjucxnrg93aqnu5vab9g0lyhpk89ljdzk7de2htigoeldnovrb38fd3shok1ilwc58nzdgjyln123jf2y1d9m7kl0j6djwdq9guu3to3mp3a94sox922q95uwtzmyhnh4lvr1ku9vxikifwvshnhznq6cszpjjyggfcl68tto62iyl2hymx57pxuku6tkat61v6earv1pllxjsrwj9c3f7sgq6m8rzrj1nms4f7d2qiybc5guwryytxnutlie5f2x58d1hexk6boiemr17e97ok5dksei1li6t12u6wc86p6d7xm2yeo5gg9q1ne9e1r13if7b73zrtfi2u5h5gjnxdcwxsn1gm75q4d9zlicazpgz9r9msy2ehwha03x9hmlixrtvtaqnl25b025uy347uhfxj35sn4ielin7cns1qf6bltste9s20ht65m18b7yaah2wzsvuvmua4ao9b8an4vabic5u671htglq4htu5y624s8rbcoj2e4bcnsvhul5ae1eleo3z9eiu24yjy6aqhjwks38cz0t8n1c4vjo9g85dpw3xlxmc8jvyeyi0jsqewcuthph11zxqyndbi3n7ss37378fqqe6sexkndgc6z3go7md7s0br3q7yyftvtcvtr9za0d8bmnm55h0qi4y89lqcobbahups9nb88id5lkdm7vs8wbvalk3wck6ws8kpdqyn92wkueux6oydt7dtukse366m0xdimya12c6khaasr83iik3f9y1s9ppzsosai825y7i0k22pgaz8n7c64wutj530wt99ik8kqaics09y0uwe5lnrfagqwklb7kwb0yqzsy7on9me2ur3i3lcdeh2kfdga4fj3jjhzqa4lmll01jei4ienx1olpchc4frul8muo5m8e86g74xqy087iho3qu6sihbkn0g7usp8uhh63v4qa0p4vey0l4bxv31369x79ewkix0466tmsgtv09iomvno2r2xsgesadgj2sjyjrh9o7vz8lolsmoah798tod1wy3q53jgcc1s5qptikisae3oje3h8augreywmn02k65vdokd9bjq9xvwxzai4r0a76zvbqwr71bfwzjnujyfqobulbgni5xxswz132qf2piqbder7d91pybvw6u3159ptd0ck82wtfp3ppgqpvhmspnawtxzqad6w0xza6rx7qtcb10shxofh3gz0f3cfsj8qqfbwyk2a0oyhztlgqfvyy03xfy8mkdwss02i3w8mdcxxm1zdweptw8lnw9suorsioy5w3ql7mge8qa7pss4zneq3c3w5s0pumh2qpmbb48qkwb343ys9zcnzh2izfvnoua22cs3hl97aakfm05lzs6fq5556ot53hrmyjhmxe0szguv1ci8n8gbn0pdmyu6w4gaonzu7f9w1qdcu53ow8z7yboq7eo7v5gakzb1nc3f3wr1p8qez
bs9zgb0jaav0sa8dub1uuwh4fux57vn8vqtpvq9rch3ixchg8ko2cc6rc5rdkjmku1ubig53yfm7oy5v5o7pihodap6ergcm22916h9hvhoyk69td0lk5z621fn5lj19chv5k4ba244qy5hk9azzn3r7pojakkb52wu5gfju5zq1gqy3tcd7mx26uc57opyit2e3376m0uyune2a98174rf5jx6pd7lyls7qexw59efm7l8jqunrxxlys8vg6sxt94rfa75szeboxwd75ygmq4niw1qcwvj85px7e8lyke3b41h61ls7kxipq8u0mmj8shsslj34qqohlfm830hpu7uk5e424g4oiv1x9vdaibkqv2e5nulu0c58djb4x784s9uudxuf2ao0md7zz02yrh9ugjdx3bvv6lwagv977ac81hs5hhjkbi55awzd86o721johlirxp65ovdq8o0gysgiu8u2y34upyp7v30t6wpreiylpot0yvua68k4ovcugmp8p4xo7o1ecerh8z90k10pm2r51iz75r 00:07:30.926 23:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:30.926 23:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:30.926 23:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:30.926 23:51:37 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:30.926 { 00:07:30.926 "subsystems": [ 00:07:30.926 { 00:07:30.926 "subsystem": "bdev", 00:07:30.926 "config": [ 00:07:30.926 { 00:07:30.926 "params": { 00:07:30.926 "trtype": "pcie", 00:07:30.926 "traddr": "0000:00:10.0", 00:07:30.926 "name": "Nvme0" 00:07:30.926 }, 00:07:30.926 "method": "bdev_nvme_attach_controller" 00:07:30.926 }, 00:07:30.926 { 00:07:30.926 "method": "bdev_wait_for_examine" 00:07:30.926 } 00:07:30.926 ] 00:07:30.926 } 00:07:30.926 ] 00:07:30.926 } 00:07:30.926 [2024-11-18 23:51:37.471057] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:30.926 [2024-11-18 23:51:37.471211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61437 ] 00:07:31.185 [2024-11-18 23:51:37.627972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.185 [2024-11-18 23:51:37.711583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.185 [2024-11-18 23:51:37.864156] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.445  [2024-11-18T23:51:39.075Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:32.383 00:07:32.383 23:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:32.383 23:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:32.383 23:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:32.383 23:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:32.383 { 00:07:32.383 "subsystems": [ 00:07:32.383 { 00:07:32.383 "subsystem": "bdev", 00:07:32.383 "config": [ 00:07:32.383 { 00:07:32.383 "params": { 00:07:32.383 "trtype": "pcie", 00:07:32.383 "traddr": "0000:00:10.0", 00:07:32.383 "name": "Nvme0" 00:07:32.383 }, 00:07:32.383 "method": "bdev_nvme_attach_controller" 00:07:32.383 }, 00:07:32.383 { 00:07:32.383 "method": "bdev_wait_for_examine" 00:07:32.383 } 00:07:32.383 ] 00:07:32.383 } 00:07:32.383 ] 00:07:32.383 } 00:07:32.641 [2024-11-18 23:51:39.092795] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
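The dd_rw_offset case exercises block offsets: with count=seek=skip=1 and the 4096-byte payload generated above, --seek=1 lands the write one block into the bdev and --skip=1 starts the read-back at that same block, so both passes must see identical bytes. A sketch under the same $conf assumption; the read -rn4096 line mirrors the harness's in-shell data check:

  build/bin/spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 \
                    --json /dev/fd/62 62< <(printf %s "$conf")
  build/bin/spdk_dd --ib=Nvme0n1 --of=test/dd/dd.dump1 --skip=1 --count=1 \
                    --json /dev/fd/62 62< <(printf %s "$conf")
  read -rn4096 data_check < test/dd/dd.dump1   # then [[ $data == "$data_check" ]]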
00:07:32.641 [2024-11-18 23:51:39.092989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61468 ] 00:07:32.641 [2024-11-18 23:51:39.271256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.900 [2024-11-18 23:51:39.358264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.900 [2024-11-18 23:51:39.519667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.157  [2024-11-18T23:51:40.786Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:34.094 00:07:34.094 23:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:34.095 23:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ lvek4m8nby4cab0q2xftfqltdsry8lgp4lqv6xeri2zmv8vw6x99uhuqt9wd03ds7t29eteymqgy7pf6pl3b5ds4grv7m35j6k6w2hnp42u86tcoapwm1tb9xjbe0ybkbly9i3luz18qz3ijt3t3l3h6vhvsvxu9tmlly57mde2qzfpc154e81o09uk3307nhxwn34t2nn0wfpnz4imr89qzbx98b4lh3xohket1bd7s6yjgrb4x65krco30ystgfa8vc7x21oej7ro0lajhbvjxllgw5qtrx8qdm1jxsyp6quea4rpbqqp0hpynx2ekg49zn7tlrw1cbryl3swd2ocs69khmia739c1zuc8npmh3ujn9zg94wc6lz30iejzu0n7boip2m9dyr74g9xnm53v7muu8x7iebllgke0g43ifjywip3neg5gu8xr705dsjpks5b4pgsum20ls7wamz5zz4o3k8do8c1wtynrjzpnka1e5muw0l5wf620jvesam45v2v3ad8wpd76fro0aomgewnd9i7k3gm63khehkg6o2rdkzg9fpf355yyadfv53i5cuo2pbx1jdn7edcrifa1ghybf150pct18zh84wqp4p8950c3dngdqksy4e771g1sgq271jh77vvu5y9ggj8v1zx3228df0wtwxci0bgvumiwwq8gq7spfq3jr0apcsnt3urtvamzma4dq9hymawne6flzbtp8m7dry66ldo3la44gcsxbr6gxx699n98kdnfvxwxo404xbmxouc6u6vav3e7rvpx59pchise34n1vrme55v9rcw747cd4raavob80meivubgqrb5kru18xjrf5tkrav7ximidrrz44943lteizgwvbulkjckwz0jkdq68w9m4qbsrq5b14un892wnnolzkpzuzthlr3prthx57vx51clg5vs7ec1nc55mqit92etboa1k0ylumrw8cjs2bswa1f1m2gsmjli0817ev7bmj6u0yx0po8yczquaywmzvl9igcmh48hzrxfsythbobo2wf7abf9cgk5d6vp2c15en3ybzlblxb55b9acss6ljrnepoioby14pugqbvbtst6ooj749qnu5j0fxtxyc8vnhd8a2y8ud4a5yh9o2yipqe4nud7uo47qravkbq3xu4abcds09aldwp7gcczp39kirpthyk0m8ai9k0v7akq9l6grtgf8793wjvyvvw8hi3z7lupy7cc26jp9eu00e521f8a87fojb5ilasanv588fbqv19bx3lccy848nf9xdecs7cexhekmpjg7wrum8g8dqa74kgabxf26mca56jr9pf9xsb54yhu3934sm9t5pkg328llequbia0qbsvb9t7bacyha4m7gpqjhj1rt5nblha66g1s223qa0m9jb277zmab1ajcpsw6f2v42tv3q9b8386bb7frev22o1ul62clf3vjty2ni78k9tgtscjog5dbd981gr8s59l8z57cc3pl166nlipjsajd5q8gz8uugtf5gz80caqndxghoexvyhomdn91yuu2ewwj18e4020jax1g25exp1t6hitgtsgw824lgqjcdfqx2p2pp13a8p69becztnti6rikfs5fbrgf90zzcgzp7otwopbk7wr6okbr3yqb0g9mrynl0wdc501awrorfs7946amsfj8k9ywm75j3tu4ix7d1pg2rixnljdbjksqa89dyoc41f3qz52iij8d62x6ie386vtl1a7ajydkxniwac2yo7734dsl95kksijyw72ap30v564jn0q2bn3d6qjnhjjqvj7jim1sc6szqkhnwtpb0zao4wjzqhfouecfxz0t44suf3yahetfar7mxvbfwsz4kc0594bu4gja5s950d76qnid5r2nvh24q2p8fmtamuqgdgchaqpm7ux25kyp7phoopwtzqojm9y5oq57kpp1c83138hlv383ujiidx2e45j68ex4gvdv9ms1p3r28m571awifiqs61qxxg85ieo8ryhevqlfb4iwd75zid5kfinmwd5gnumb3bmjucxnrg93aqnu5vab9g0lyhpk89ljdzk7de2htigoeldnovrb38fd3shok1ilwc58nzdgjyln123jf2y1d9m7kl0j6djwdq9guu3to3mp3a94sox922q95uwtzmyhnh4lvr1ku9vxikifwvshnhznq6cszpjjyggfcl68tto62iyl2hymx57pxuku6tkat61v6earv1pllxjsrwj9c3f7sgq6m8rzrj1nms4f7d2qiybc5guwryytxnutlie5f2x58d1hexk6boiemr17e97ok5dksei1li6t12u6wc86p6d7xm2yeo5gg9q1ne9e1r13if7b73zrtfi2u5h5gjnxdcwxsn1gm75q4d9zlicazpgz9r9msy2ehwha03x9hmlixrtvtaqnl25b025uy347uhfxj35sn4ielin7cns1qf6bltste9s20ht65m18b7yaah2wzsvuvmua4ao9b8an4vabic5u671htglq4htu5y624s8rbcoj2e4bcnsvhul5ae1eleo3z9eiu2
4yjy6aqhjwks38cz0t8n1c4vjo9g85dpw3xlxmc8jvyeyi0jsqewcuthph11zxqyndbi3n7ss37378fqqe6sexkndgc6z3go7md7s0br3q7yyftvtcvtr9za0d8bmnm55h0qi4y89lqcobbahups9nb88id5lkdm7vs8wbvalk3wck6ws8kpdqyn92wkueux6oydt7dtukse366m0xdimya12c6khaasr83iik3f9y1s9ppzsosai825y7i0k22pgaz8n7c64wutj530wt99ik8kqaics09y0uwe5lnrfagqwklb7kwb0yqzsy7on9me2ur3i3lcdeh2kfdga4fj3jjhzqa4lmll01jei4ienx1olpchc4frul8muo5m8e86g74xqy087iho3qu6sihbkn0g7usp8uhh63v4qa0p4vey0l4bxv31369x79ewkix0466tmsgtv09iomvno2r2xsgesadgj2sjyjrh9o7vz8lolsmoah798tod1wy3q53jgcc1s5qptikisae3oje3h8augreywmn02k65vdokd9bjq9xvwxzai4r0a76zvbqwr71bfwzjnujyfqobulbgni5xxswz132qf2piqbder7d91pybvw6u3159ptd0ck82wtfp3ppgqpvhmspnawtxzqad6w0xza6rx7qtcb10shxofh3gz0f3cfsj8qqfbwyk2a0oyhztlgqfvyy03xfy8mkdwss02i3w8mdcxxm1zdweptw8lnw9suorsioy5w3ql7mge8qa7pss4zneq3c3w5s0pumh2qpmbb48qkwb343ys9zcnzh2izfvnoua22cs3hl97aakfm05lzs6fq5556ot53hrmyjhmxe0szguv1ci8n8gbn0pdmyu6w4gaonzu7f9w1qdcu53ow8z7yboq7eo7v5gakzb1nc3f3wr1p8qezbs9zgb0jaav0sa8dub1uuwh4fux57vn8vqtpvq9rch3ixchg8ko2cc6rc5rdkjmku1ubig53yfm7oy5v5o7pihodap6ergcm22916h9hvhoyk69td0lk5z621fn5lj19chv5k4ba244qy5hk9azzn3r7pojakkb52wu5gfju5zq1gqy3tcd7mx26uc57opyit2e3376m0uyune2a98174rf5jx6pd7lyls7qexw59efm7l8jqunrxxlys8vg6sxt94rfa75szeboxwd75ygmq4niw1qcwvj85px7e8lyke3b41h61ls7kxipq8u0mmj8shsslj34qqohlfm830hpu7uk5e424g4oiv1x9vdaibkqv2e5nulu0c58djb4x784s9uudxuf2ao0md7zz02yrh9ugjdx3bvv6lwagv977ac81hs5hhjkbi55awzd86o721johlirxp65ovdq8o0gysgiu8u2y34upyp7v30t6wpreiylpot0yvua68k4ovcugmp8p4xo7o1ecerh8z90k10pm2r51iz75r == \l\v\e\k\4\m\8\n\b\y\4\c\a\b\0\q\2\x\f\t\f\q\l\t\d\s\r\y\8\l\g\p\4\l\q\v\6\x\e\r\i\2\z\m\v\8\v\w\6\x\9\9\u\h\u\q\t\9\w\d\0\3\d\s\7\t\2\9\e\t\e\y\m\q\g\y\7\p\f\6\p\l\3\b\5\d\s\4\g\r\v\7\m\3\5\j\6\k\6\w\2\h\n\p\4\2\u\8\6\t\c\o\a\p\w\m\1\t\b\9\x\j\b\e\0\y\b\k\b\l\y\9\i\3\l\u\z\1\8\q\z\3\i\j\t\3\t\3\l\3\h\6\v\h\v\s\v\x\u\9\t\m\l\l\y\5\7\m\d\e\2\q\z\f\p\c\1\5\4\e\8\1\o\0\9\u\k\3\3\0\7\n\h\x\w\n\3\4\t\2\n\n\0\w\f\p\n\z\4\i\m\r\8\9\q\z\b\x\9\8\b\4\l\h\3\x\o\h\k\e\t\1\b\d\7\s\6\y\j\g\r\b\4\x\6\5\k\r\c\o\3\0\y\s\t\g\f\a\8\v\c\7\x\2\1\o\e\j\7\r\o\0\l\a\j\h\b\v\j\x\l\l\g\w\5\q\t\r\x\8\q\d\m\1\j\x\s\y\p\6\q\u\e\a\4\r\p\b\q\q\p\0\h\p\y\n\x\2\e\k\g\4\9\z\n\7\t\l\r\w\1\c\b\r\y\l\3\s\w\d\2\o\c\s\6\9\k\h\m\i\a\7\3\9\c\1\z\u\c\8\n\p\m\h\3\u\j\n\9\z\g\9\4\w\c\6\l\z\3\0\i\e\j\z\u\0\n\7\b\o\i\p\2\m\9\d\y\r\7\4\g\9\x\n\m\5\3\v\7\m\u\u\8\x\7\i\e\b\l\l\g\k\e\0\g\4\3\i\f\j\y\w\i\p\3\n\e\g\5\g\u\8\x\r\7\0\5\d\s\j\p\k\s\5\b\4\p\g\s\u\m\2\0\l\s\7\w\a\m\z\5\z\z\4\o\3\k\8\d\o\8\c\1\w\t\y\n\r\j\z\p\n\k\a\1\e\5\m\u\w\0\l\5\w\f\6\2\0\j\v\e\s\a\m\4\5\v\2\v\3\a\d\8\w\p\d\7\6\f\r\o\0\a\o\m\g\e\w\n\d\9\i\7\k\3\g\m\6\3\k\h\e\h\k\g\6\o\2\r\d\k\z\g\9\f\p\f\3\5\5\y\y\a\d\f\v\5\3\i\5\c\u\o\2\p\b\x\1\j\d\n\7\e\d\c\r\i\f\a\1\g\h\y\b\f\1\5\0\p\c\t\1\8\z\h\8\4\w\q\p\4\p\8\9\5\0\c\3\d\n\g\d\q\k\s\y\4\e\7\7\1\g\1\s\g\q\2\7\1\j\h\7\7\v\v\u\5\y\9\g\g\j\8\v\1\z\x\3\2\2\8\d\f\0\w\t\w\x\c\i\0\b\g\v\u\m\i\w\w\q\8\g\q\7\s\p\f\q\3\j\r\0\a\p\c\s\n\t\3\u\r\t\v\a\m\z\m\a\4\d\q\9\h\y\m\a\w\n\e\6\f\l\z\b\t\p\8\m\7\d\r\y\6\6\l\d\o\3\l\a\4\4\g\c\s\x\b\r\6\g\x\x\6\9\9\n\9\8\k\d\n\f\v\x\w\x\o\4\0\4\x\b\m\x\o\u\c\6\u\6\v\a\v\3\e\7\r\v\p\x\5\9\p\c\h\i\s\e\3\4\n\1\v\r\m\e\5\5\v\9\r\c\w\7\4\7\c\d\4\r\a\a\v\o\b\8\0\m\e\i\v\u\b\g\q\r\b\5\k\r\u\1\8\x\j\r\f\5\t\k\r\a\v\7\x\i\m\i\d\r\r\z\4\4\9\4\3\l\t\e\i\z\g\w\v\b\u\l\k\j\c\k\w\z\0\j\k\d\q\6\8\w\9\m\4\q\b\s\r\q\5\b\1\4\u\n\8\9\2\w\n\n\o\l\z\k\p\z\u\z\t\h\l\r\3\p\r\t\h\x\5\7\v\x\5\1\c\l\g\5\v\s\7\e\c\1\n\c\5\5\m\q\i\t\9\2\e\t\b\o\a\1\k\0\y\l\u\m\r\w\8\c\j\s\2\b\s\w\a\1\f\1\m\2\g\s\m\j\l\i\0\8\1\7\e\v\7\b\m\j\6\u\0\y\x\0\p\o\8\y\c\z\q\u\a\y\w\m\z\v\l\
9\i\g\c\m\h\4\8\h\z\r\x\f\s\y\t\h\b\o\b\o\2\w\f\7\a\b\f\9\c\g\k\5\d\6\v\p\2\c\1\5\e\n\3\y\b\z\l\b\l\x\b\5\5\b\9\a\c\s\s\6\l\j\r\n\e\p\o\i\o\b\y\1\4\p\u\g\q\b\v\b\t\s\t\6\o\o\j\7\4\9\q\n\u\5\j\0\f\x\t\x\y\c\8\v\n\h\d\8\a\2\y\8\u\d\4\a\5\y\h\9\o\2\y\i\p\q\e\4\n\u\d\7\u\o\4\7\q\r\a\v\k\b\q\3\x\u\4\a\b\c\d\s\0\9\a\l\d\w\p\7\g\c\c\z\p\3\9\k\i\r\p\t\h\y\k\0\m\8\a\i\9\k\0\v\7\a\k\q\9\l\6\g\r\t\g\f\8\7\9\3\w\j\v\y\v\v\w\8\h\i\3\z\7\l\u\p\y\7\c\c\2\6\j\p\9\e\u\0\0\e\5\2\1\f\8\a\8\7\f\o\j\b\5\i\l\a\s\a\n\v\5\8\8\f\b\q\v\1\9\b\x\3\l\c\c\y\8\4\8\n\f\9\x\d\e\c\s\7\c\e\x\h\e\k\m\p\j\g\7\w\r\u\m\8\g\8\d\q\a\7\4\k\g\a\b\x\f\2\6\m\c\a\5\6\j\r\9\p\f\9\x\s\b\5\4\y\h\u\3\9\3\4\s\m\9\t\5\p\k\g\3\2\8\l\l\e\q\u\b\i\a\0\q\b\s\v\b\9\t\7\b\a\c\y\h\a\4\m\7\g\p\q\j\h\j\1\r\t\5\n\b\l\h\a\6\6\g\1\s\2\2\3\q\a\0\m\9\j\b\2\7\7\z\m\a\b\1\a\j\c\p\s\w\6\f\2\v\4\2\t\v\3\q\9\b\8\3\8\6\b\b\7\f\r\e\v\2\2\o\1\u\l\6\2\c\l\f\3\v\j\t\y\2\n\i\7\8\k\9\t\g\t\s\c\j\o\g\5\d\b\d\9\8\1\g\r\8\s\5\9\l\8\z\5\7\c\c\3\p\l\1\6\6\n\l\i\p\j\s\a\j\d\5\q\8\g\z\8\u\u\g\t\f\5\g\z\8\0\c\a\q\n\d\x\g\h\o\e\x\v\y\h\o\m\d\n\9\1\y\u\u\2\e\w\w\j\1\8\e\4\0\2\0\j\a\x\1\g\2\5\e\x\p\1\t\6\h\i\t\g\t\s\g\w\8\2\4\l\g\q\j\c\d\f\q\x\2\p\2\p\p\1\3\a\8\p\6\9\b\e\c\z\t\n\t\i\6\r\i\k\f\s\5\f\b\r\g\f\9\0\z\z\c\g\z\p\7\o\t\w\o\p\b\k\7\w\r\6\o\k\b\r\3\y\q\b\0\g\9\m\r\y\n\l\0\w\d\c\5\0\1\a\w\r\o\r\f\s\7\9\4\6\a\m\s\f\j\8\k\9\y\w\m\7\5\j\3\t\u\4\i\x\7\d\1\p\g\2\r\i\x\n\l\j\d\b\j\k\s\q\a\8\9\d\y\o\c\4\1\f\3\q\z\5\2\i\i\j\8\d\6\2\x\6\i\e\3\8\6\v\t\l\1\a\7\a\j\y\d\k\x\n\i\w\a\c\2\y\o\7\7\3\4\d\s\l\9\5\k\k\s\i\j\y\w\7\2\a\p\3\0\v\5\6\4\j\n\0\q\2\b\n\3\d\6\q\j\n\h\j\j\q\v\j\7\j\i\m\1\s\c\6\s\z\q\k\h\n\w\t\p\b\0\z\a\o\4\w\j\z\q\h\f\o\u\e\c\f\x\z\0\t\4\4\s\u\f\3\y\a\h\e\t\f\a\r\7\m\x\v\b\f\w\s\z\4\k\c\0\5\9\4\b\u\4\g\j\a\5\s\9\5\0\d\7\6\q\n\i\d\5\r\2\n\v\h\2\4\q\2\p\8\f\m\t\a\m\u\q\g\d\g\c\h\a\q\p\m\7\u\x\2\5\k\y\p\7\p\h\o\o\p\w\t\z\q\o\j\m\9\y\5\o\q\5\7\k\p\p\1\c\8\3\1\3\8\h\l\v\3\8\3\u\j\i\i\d\x\2\e\4\5\j\6\8\e\x\4\g\v\d\v\9\m\s\1\p\3\r\2\8\m\5\7\1\a\w\i\f\i\q\s\6\1\q\x\x\g\8\5\i\e\o\8\r\y\h\e\v\q\l\f\b\4\i\w\d\7\5\z\i\d\5\k\f\i\n\m\w\d\5\g\n\u\m\b\3\b\m\j\u\c\x\n\r\g\9\3\a\q\n\u\5\v\a\b\9\g\0\l\y\h\p\k\8\9\l\j\d\z\k\7\d\e\2\h\t\i\g\o\e\l\d\n\o\v\r\b\3\8\f\d\3\s\h\o\k\1\i\l\w\c\5\8\n\z\d\g\j\y\l\n\1\2\3\j\f\2\y\1\d\9\m\7\k\l\0\j\6\d\j\w\d\q\9\g\u\u\3\t\o\3\m\p\3\a\9\4\s\o\x\9\2\2\q\9\5\u\w\t\z\m\y\h\n\h\4\l\v\r\1\k\u\9\v\x\i\k\i\f\w\v\s\h\n\h\z\n\q\6\c\s\z\p\j\j\y\g\g\f\c\l\6\8\t\t\o\6\2\i\y\l\2\h\y\m\x\5\7\p\x\u\k\u\6\t\k\a\t\6\1\v\6\e\a\r\v\1\p\l\l\x\j\s\r\w\j\9\c\3\f\7\s\g\q\6\m\8\r\z\r\j\1\n\m\s\4\f\7\d\2\q\i\y\b\c\5\g\u\w\r\y\y\t\x\n\u\t\l\i\e\5\f\2\x\5\8\d\1\h\e\x\k\6\b\o\i\e\m\r\1\7\e\9\7\o\k\5\d\k\s\e\i\1\l\i\6\t\1\2\u\6\w\c\8\6\p\6\d\7\x\m\2\y\e\o\5\g\g\9\q\1\n\e\9\e\1\r\1\3\i\f\7\b\7\3\z\r\t\f\i\2\u\5\h\5\g\j\n\x\d\c\w\x\s\n\1\g\m\7\5\q\4\d\9\z\l\i\c\a\z\p\g\z\9\r\9\m\s\y\2\e\h\w\h\a\0\3\x\9\h\m\l\i\x\r\t\v\t\a\q\n\l\2\5\b\0\2\5\u\y\3\4\7\u\h\f\x\j\3\5\s\n\4\i\e\l\i\n\7\c\n\s\1\q\f\6\b\l\t\s\t\e\9\s\2\0\h\t\6\5\m\1\8\b\7\y\a\a\h\2\w\z\s\v\u\v\m\u\a\4\a\o\9\b\8\a\n\4\v\a\b\i\c\5\u\6\7\1\h\t\g\l\q\4\h\t\u\5\y\6\2\4\s\8\r\b\c\o\j\2\e\4\b\c\n\s\v\h\u\l\5\a\e\1\e\l\e\o\3\z\9\e\i\u\2\4\y\j\y\6\a\q\h\j\w\k\s\3\8\c\z\0\t\8\n\1\c\4\v\j\o\9\g\8\5\d\p\w\3\x\l\x\m\c\8\j\v\y\e\y\i\0\j\s\q\e\w\c\u\t\h\p\h\1\1\z\x\q\y\n\d\b\i\3\n\7\s\s\3\7\3\7\8\f\q\q\e\6\s\e\x\k\n\d\g\c\6\z\3\g\o\7\m\d\7\s\0\b\r\3\q\7\y\y\f\t\v\t\c\v\t\r\9\z\a\0\d\8\b\m\n\m\5\5\h\0\q\i\4\y\8\9\l\q\c\o\b\b\a\h\u\p\s\9\n\b\8\8\i\d\5\l\k\d\m\7\v\s\8\w\b\v\a\l\k\3\w\c\k\6\w\s\8\k\p\d\q\y\n\9\2\w\k\u\e\u\x\6\o\y\d\t\7\d\t\u
\k\s\e\3\6\6\m\0\x\d\i\m\y\a\1\2\c\6\k\h\a\a\s\r\8\3\i\i\k\3\f\9\y\1\s\9\p\p\z\s\o\s\a\i\8\2\5\y\7\i\0\k\2\2\p\g\a\z\8\n\7\c\6\4\w\u\t\j\5\3\0\w\t\9\9\i\k\8\k\q\a\i\c\s\0\9\y\0\u\w\e\5\l\n\r\f\a\g\q\w\k\l\b\7\k\w\b\0\y\q\z\s\y\7\o\n\9\m\e\2\u\r\3\i\3\l\c\d\e\h\2\k\f\d\g\a\4\f\j\3\j\j\h\z\q\a\4\l\m\l\l\0\1\j\e\i\4\i\e\n\x\1\o\l\p\c\h\c\4\f\r\u\l\8\m\u\o\5\m\8\e\8\6\g\7\4\x\q\y\0\8\7\i\h\o\3\q\u\6\s\i\h\b\k\n\0\g\7\u\s\p\8\u\h\h\6\3\v\4\q\a\0\p\4\v\e\y\0\l\4\b\x\v\3\1\3\6\9\x\7\9\e\w\k\i\x\0\4\6\6\t\m\s\g\t\v\0\9\i\o\m\v\n\o\2\r\2\x\s\g\e\s\a\d\g\j\2\s\j\y\j\r\h\9\o\7\v\z\8\l\o\l\s\m\o\a\h\7\9\8\t\o\d\1\w\y\3\q\5\3\j\g\c\c\1\s\5\q\p\t\i\k\i\s\a\e\3\o\j\e\3\h\8\a\u\g\r\e\y\w\m\n\0\2\k\6\5\v\d\o\k\d\9\b\j\q\9\x\v\w\x\z\a\i\4\r\0\a\7\6\z\v\b\q\w\r\7\1\b\f\w\z\j\n\u\j\y\f\q\o\b\u\l\b\g\n\i\5\x\x\s\w\z\1\3\2\q\f\2\p\i\q\b\d\e\r\7\d\9\1\p\y\b\v\w\6\u\3\1\5\9\p\t\d\0\c\k\8\2\w\t\f\p\3\p\p\g\q\p\v\h\m\s\p\n\a\w\t\x\z\q\a\d\6\w\0\x\z\a\6\r\x\7\q\t\c\b\1\0\s\h\x\o\f\h\3\g\z\0\f\3\c\f\s\j\8\q\q\f\b\w\y\k\2\a\0\o\y\h\z\t\l\g\q\f\v\y\y\0\3\x\f\y\8\m\k\d\w\s\s\0\2\i\3\w\8\m\d\c\x\x\m\1\z\d\w\e\p\t\w\8\l\n\w\9\s\u\o\r\s\i\o\y\5\w\3\q\l\7\m\g\e\8\q\a\7\p\s\s\4\z\n\e\q\3\c\3\w\5\s\0\p\u\m\h\2\q\p\m\b\b\4\8\q\k\w\b\3\4\3\y\s\9\z\c\n\z\h\2\i\z\f\v\n\o\u\a\2\2\c\s\3\h\l\9\7\a\a\k\f\m\0\5\l\z\s\6\f\q\5\5\5\6\o\t\5\3\h\r\m\y\j\h\m\x\e\0\s\z\g\u\v\1\c\i\8\n\8\g\b\n\0\p\d\m\y\u\6\w\4\g\a\o\n\z\u\7\f\9\w\1\q\d\c\u\5\3\o\w\8\z\7\y\b\o\q\7\e\o\7\v\5\g\a\k\z\b\1\n\c\3\f\3\w\r\1\p\8\q\e\z\b\s\9\z\g\b\0\j\a\a\v\0\s\a\8\d\u\b\1\u\u\w\h\4\f\u\x\5\7\v\n\8\v\q\t\p\v\q\9\r\c\h\3\i\x\c\h\g\8\k\o\2\c\c\6\r\c\5\r\d\k\j\m\k\u\1\u\b\i\g\5\3\y\f\m\7\o\y\5\v\5\o\7\p\i\h\o\d\a\p\6\e\r\g\c\m\2\2\9\1\6\h\9\h\v\h\o\y\k\6\9\t\d\0\l\k\5\z\6\2\1\f\n\5\l\j\1\9\c\h\v\5\k\4\b\a\2\4\4\q\y\5\h\k\9\a\z\z\n\3\r\7\p\o\j\a\k\k\b\5\2\w\u\5\g\f\j\u\5\z\q\1\g\q\y\3\t\c\d\7\m\x\2\6\u\c\5\7\o\p\y\i\t\2\e\3\3\7\6\m\0\u\y\u\n\e\2\a\9\8\1\7\4\r\f\5\j\x\6\p\d\7\l\y\l\s\7\q\e\x\w\5\9\e\f\m\7\l\8\j\q\u\n\r\x\x\l\y\s\8\v\g\6\s\x\t\9\4\r\f\a\7\5\s\z\e\b\o\x\w\d\7\5\y\g\m\q\4\n\i\w\1\q\c\w\v\j\8\5\p\x\7\e\8\l\y\k\e\3\b\4\1\h\6\1\l\s\7\k\x\i\p\q\8\u\0\m\m\j\8\s\h\s\s\l\j\3\4\q\q\o\h\l\f\m\8\3\0\h\p\u\7\u\k\5\e\4\2\4\g\4\o\i\v\1\x\9\v\d\a\i\b\k\q\v\2\e\5\n\u\l\u\0\c\5\8\d\j\b\4\x\7\8\4\s\9\u\u\d\x\u\f\2\a\o\0\m\d\7\z\z\0\2\y\r\h\9\u\g\j\d\x\3\b\v\v\6\l\w\a\g\v\9\7\7\a\c\8\1\h\s\5\h\h\j\k\b\i\5\5\a\w\z\d\8\6\o\7\2\1\j\o\h\l\i\r\x\p\6\5\o\v\d\q\8\o\0\g\y\s\g\i\u\8\u\2\y\3\4\u\p\y\p\7\v\3\0\t\6\w\p\r\e\i\y\l\p\o\t\0\y\v\u\a\6\8\k\4\o\v\c\u\g\m\p\8\p\4\x\o\7\o\1\e\c\e\r\h\8\z\9\0\k\1\0\p\m\2\r\5\1\i\z\7\5\r ]] 00:07:34.095 00:07:34.095 real 0m3.139s 00:07:34.095 user 0m2.611s 00:07:34.095 sys 0m1.656s 00:07:34.095 23:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.095 23:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:34.095 ************************************ 00:07:34.095 END TEST dd_rw_offset 00:07:34.095 ************************************ 00:07:34.095 23:51:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:34.095 23:51:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:34.095 23:51:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:34.095 23:51:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:34.095 23:51:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:34.095 23:51:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:07:34.095 23:51:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:34.095 23:51:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:34.095 23:51:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:34.095 23:51:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:34.095 23:51:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:34.095 { 00:07:34.095 "subsystems": [ 00:07:34.095 { 00:07:34.095 "subsystem": "bdev", 00:07:34.095 "config": [ 00:07:34.095 { 00:07:34.095 "params": { 00:07:34.095 "trtype": "pcie", 00:07:34.095 "traddr": "0000:00:10.0", 00:07:34.095 "name": "Nvme0" 00:07:34.095 }, 00:07:34.095 "method": "bdev_nvme_attach_controller" 00:07:34.095 }, 00:07:34.095 { 00:07:34.095 "method": "bdev_wait_for_examine" 00:07:34.095 } 00:07:34.095 ] 00:07:34.095 } 00:07:34.095 ] 00:07:34.095 } 00:07:34.095 [2024-11-18 23:51:40.629754] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:34.095 [2024-11-18 23:51:40.629928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61504 ] 00:07:34.354 [2024-11-18 23:51:40.806614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.354 [2024-11-18 23:51:40.888773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.612 [2024-11-18 23:51:41.055901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.612  [2024-11-18T23:51:42.260Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:35.568 00:07:35.568 23:51:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.568 00:07:35.568 real 0m36.399s 00:07:35.568 user 0m30.074s 00:07:35.568 sys 0m16.751s 00:07:35.568 23:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.568 ************************************ 00:07:35.568 END TEST spdk_dd_basic_rw 00:07:35.568 ************************************ 00:07:35.568 23:51:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:35.827 23:51:42 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:35.827 23:51:42 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.827 23:51:42 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.827 23:51:42 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:35.827 ************************************ 00:07:35.827 START TEST spdk_dd_posix 00:07:35.827 ************************************ 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:35.827 * Looking for test storage... 
00:07:35.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.827 23:51:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:35.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.827 --rc genhtml_branch_coverage=1 00:07:35.827 --rc genhtml_function_coverage=1 00:07:35.827 --rc genhtml_legend=1 00:07:35.827 --rc geninfo_all_blocks=1 00:07:35.827 --rc geninfo_unexecuted_blocks=1 00:07:35.827 00:07:35.827 ' 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:35.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.828 --rc genhtml_branch_coverage=1 00:07:35.828 --rc genhtml_function_coverage=1 00:07:35.828 --rc genhtml_legend=1 00:07:35.828 --rc geninfo_all_blocks=1 00:07:35.828 --rc geninfo_unexecuted_blocks=1 00:07:35.828 00:07:35.828 ' 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:35.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.828 --rc genhtml_branch_coverage=1 00:07:35.828 --rc genhtml_function_coverage=1 00:07:35.828 --rc genhtml_legend=1 00:07:35.828 --rc geninfo_all_blocks=1 00:07:35.828 --rc geninfo_unexecuted_blocks=1 00:07:35.828 00:07:35.828 ' 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:35.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.828 --rc genhtml_branch_coverage=1 00:07:35.828 --rc genhtml_function_coverage=1 00:07:35.828 --rc genhtml_legend=1 00:07:35.828 --rc geninfo_all_blocks=1 00:07:35.828 --rc geninfo_unexecuted_blocks=1 00:07:35.828 00:07:35.828 ' 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:35.828 * First test run, liburing in use 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:35.828 ************************************ 00:07:35.828 START TEST dd_flag_append 00:07:35.828 ************************************ 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=cjoft529k44gpbcvlxaf9wya898glzqa 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=3jc9qje7uh3hjb6uco87vnafcol19slc 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s cjoft529k44gpbcvlxaf9wya898glzqa 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 3jc9qje7uh3hjb6uco87vnafcol19slc 00:07:35.828 23:51:42 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:36.087 [2024-11-18 23:51:42.589509] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
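dd_flag_append is a pure file-to-file case, so no bdev config is involved: two 32-byte strings are generated, one seeds dd.dump1, the other is copied in with --oflag=append, and the destination must end up as the concatenation — exactly the 3jc9…cjoft… match that follows. A sketch with illustrative variable names (the seeding order is inferred from the printf lines above):

  printf %s "$dump1" > test/dd/dd.dump1   # seed the destination
  printf %s "$dump0" > test/dd/dd.dump0
  build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --oflag=append
  [[ $(<test/dd/dd.dump1) == "${dump1}${dump0}" ]]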
00:07:36.087 [2024-11-18 23:51:42.589709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61588 ] 00:07:36.087 [2024-11-18 23:51:42.767141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.346 [2024-11-18 23:51:42.853303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.346 [2024-11-18 23:51:43.007707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.604  [2024-11-18T23:51:44.232Z] Copying: 32/32 [B] (average 31 kBps) 00:07:37.540 00:07:37.540 23:51:43 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 3jc9qje7uh3hjb6uco87vnafcol19slccjoft529k44gpbcvlxaf9wya898glzqa == \3\j\c\9\q\j\e\7\u\h\3\h\j\b\6\u\c\o\8\7\v\n\a\f\c\o\l\1\9\s\l\c\c\j\o\f\t\5\2\9\k\4\4\g\p\b\c\v\l\x\a\f\9\w\y\a\8\9\8\g\l\z\q\a ]] 00:07:37.540 00:07:37.540 real 0m1.463s 00:07:37.540 user 0m1.145s 00:07:37.540 sys 0m0.799s 00:07:37.540 23:51:43 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.540 23:51:43 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:37.540 ************************************ 00:07:37.540 END TEST dd_flag_append 00:07:37.540 ************************************ 00:07:37.540 23:51:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:37.540 23:51:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.540 23:51:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.540 23:51:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:37.540 ************************************ 00:07:37.540 START TEST dd_flag_directory 00:07:37.540 ************************************ 00:07:37.540 23:51:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:07:37.540 23:51:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:37.540 23:51:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:37.540 23:51:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:37.540 23:51:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.540 23:51:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.540 23:51:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.540 23:51:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.540 23:51:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.541 23:51:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.541 23:51:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.541 23:51:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:37.541 23:51:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:37.541 [2024-11-18 23:51:44.104846] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:37.541 [2024-11-18 23:51:44.105044] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61623 ] 00:07:37.799 [2024-11-18 23:51:44.295356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.799 [2024-11-18 23:51:44.422723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.058 [2024-11-18 23:51:44.597988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.058 [2024-11-18 23:51:44.681147] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:38.058 [2024-11-18 23:51:44.681297] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:38.058 [2024-11-18 23:51:44.681321] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.994 [2024-11-18 23:51:45.351502] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:38.994 23:51:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:38.994 23:51:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:38.994 23:51:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:38.994 23:51:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:38.994 23:51:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:38.994 23:51:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:38.994 23:51:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:38.994 23:51:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:38.994 23:51:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:38.994 23:51:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.994 23:51:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.994 23:51:45 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.994 23:51:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.994 23:51:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.994 23:51:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.994 23:51:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.994 23:51:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:38.994 23:51:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:39.253 [2024-11-18 23:51:45.751304] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:39.253 [2024-11-18 23:51:45.751479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61650 ] 00:07:39.253 [2024-11-18 23:51:45.933388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.512 [2024-11-18 23:51:46.045015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.771 [2024-11-18 23:51:46.243777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.771 [2024-11-18 23:51:46.350307] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:39.771 [2024-11-18 23:51:46.350386] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:39.771 [2024-11-18 23:51:46.350421] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.708 [2024-11-18 23:51:47.102322] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:40.708 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:40.708 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:40.708 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:40.708 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:40.708 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:40.708 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:40.708 00:07:40.708 real 0m3.405s 00:07:40.708 user 0m2.764s 00:07:40.708 sys 0m0.419s 00:07:40.708 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.708 ************************************ 00:07:40.708 END TEST dd_flag_directory 00:07:40.708 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:40.708 ************************************ 00:07:40.967 23:51:47 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:40.967 ************************************ 00:07:40.967 START TEST dd_flag_nofollow 00:07:40.967 ************************************ 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:40.967 23:51:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.967 [2024-11-18 23:51:47.568953] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
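The nofollow case builds symlinks to the two dump files (the ln -fs lines above) and then expects the copy to fail: --iflag=nofollow presumably opens the source with O_NOFOLLOW, so resolving dd.dump0.link yields ELOOP — the "Too many levels of symbolic links" error that follows — and the NOT wrapper counts that non-zero exit as a pass. Sketched:

  ln -fs test/dd/dd.dump0 test/dd/dd.dump0.link
  if build/bin/spdk_dd --if=test/dd/dd.dump0.link --iflag=nofollow \
                       --of=test/dd/dd.dump1; then
      echo 'unexpected success' >&2; exit 1   # the harness's NOT() expects failure here
  fi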
00:07:40.967 [2024-11-18 23:51:47.569129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61696 ] 00:07:41.226 [2024-11-18 23:51:47.751139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.226 [2024-11-18 23:51:47.864541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.486 [2024-11-18 23:51:48.060622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.486 [2024-11-18 23:51:48.169464] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:41.486 [2024-11-18 23:51:48.169554] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:41.486 [2024-11-18 23:51:48.169580] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:42.423 [2024-11-18 23:51:48.923427] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:42.680 23:51:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:42.680 23:51:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:42.680 23:51:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:42.680 23:51:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:42.680 23:51:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:42.680 23:51:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:42.680 23:51:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:42.680 23:51:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:42.680 23:51:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:42.680 23:51:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.680 23:51:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:42.680 23:51:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.680 23:51:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:42.680 23:51:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.680 23:51:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:42.680 23:51:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.680 23:51:49 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.680 23:51:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:42.680 [2024-11-18 23:51:49.323840] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:42.680 [2024-11-18 23:51:49.324017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61718 ] 00:07:42.938 [2024-11-18 23:51:49.506689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.938 [2024-11-18 23:51:49.617718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.197 [2024-11-18 23:51:49.810006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.456 [2024-11-18 23:51:49.914361] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:43.456 [2024-11-18 23:51:49.914497] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:43.456 [2024-11-18 23:51:49.914539] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:44.024 [2024-11-18 23:51:50.670337] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:44.282 23:51:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:44.282 23:51:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:44.282 23:51:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:44.282 23:51:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:44.282 23:51:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:44.282 23:51:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:44.282 23:51:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:44.282 23:51:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:44.282 23:51:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:44.282 23:51:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:44.541 [2024-11-18 23:51:51.064772] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
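Both negative runs above end in the same status dance from common/autotest_common.sh: the raw exit status 216 is folded down because it exceeds 128 (@663/@664 show 216 -> 88), a case statement normalizes that to es=1 (@665/@672), and @679 asserts (( !es == 0 )). A hedged reconstruction of that tail end of the NOT helper — the real script has more branches than this trace exercises:

    NOT() {
        local es=0
        "$@" || es=$?                          # raw status; 216 in the runs above
        (( es > 128 )) && es=$(( es - 128 ))   # 216 -> 88, matching @663/@664
        case "$es" in 0) ;; *) es=1 ;; esac    # stand-in for the mapping at @665-@672
        (( !es == 0 ))                         # arithmetic negation: true iff es != 0,
                                               # so NOT passes only when the copy failed
    }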
00:07:44.541 [2024-11-18 23:51:51.064947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61737 ] 00:07:44.800 [2024-11-18 23:51:51.246695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.800 [2024-11-18 23:51:51.354716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.060 [2024-11-18 23:51:51.547152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.060  [2024-11-18T23:51:52.689Z] Copying: 512/512 [B] (average 500 kBps) 00:07:45.997 00:07:45.997 23:51:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ f3sbubjz94p9n3suns5mnxt4fh4s6kkv5w20ef48p9lihil53pzeubk2ox3vmeo38b33ikfojdgzbe5zp3lvylm22dzb4jiho23eugx2lw0vkor8kcglvlad5rla25pl6h3ko7p0e7o9r090h2zwt1uzpnz0tack5hyp7i4x9pqbnnb0zjyctpqo54rt1uudlw2kqhypyj9h411bkqyyzdhhy7gtmn7bd37zf7qcoa0gzph92x2cqnkwiel4p4retumna2h7dtrxnmgumnll53i38aa26h3ufr02htux3eaexepyu819vwzdbgsyiv5e4zh7i68lrjpjjz5oy3ffwgg0njki3mfwvmf0sv5icwjb89rz7g4ycahb9r4ybm46dsgyj1ccmontleiiw9e1menhp5xnpnaw1uhfljxt3omvmqzxtrz9mehx8n2907uyo7jde1pi4urm7m0fx1rvkihc3yfmdnpsdhunyqqf7jsqskueg1c7i7xb5iu498ce == \f\3\s\b\u\b\j\z\9\4\p\9\n\3\s\u\n\s\5\m\n\x\t\4\f\h\4\s\6\k\k\v\5\w\2\0\e\f\4\8\p\9\l\i\h\i\l\5\3\p\z\e\u\b\k\2\o\x\3\v\m\e\o\3\8\b\3\3\i\k\f\o\j\d\g\z\b\e\5\z\p\3\l\v\y\l\m\2\2\d\z\b\4\j\i\h\o\2\3\e\u\g\x\2\l\w\0\v\k\o\r\8\k\c\g\l\v\l\a\d\5\r\l\a\2\5\p\l\6\h\3\k\o\7\p\0\e\7\o\9\r\0\9\0\h\2\z\w\t\1\u\z\p\n\z\0\t\a\c\k\5\h\y\p\7\i\4\x\9\p\q\b\n\n\b\0\z\j\y\c\t\p\q\o\5\4\r\t\1\u\u\d\l\w\2\k\q\h\y\p\y\j\9\h\4\1\1\b\k\q\y\y\z\d\h\h\y\7\g\t\m\n\7\b\d\3\7\z\f\7\q\c\o\a\0\g\z\p\h\9\2\x\2\c\q\n\k\w\i\e\l\4\p\4\r\e\t\u\m\n\a\2\h\7\d\t\r\x\n\m\g\u\m\n\l\l\5\3\i\3\8\a\a\2\6\h\3\u\f\r\0\2\h\t\u\x\3\e\a\e\x\e\p\y\u\8\1\9\v\w\z\d\b\g\s\y\i\v\5\e\4\z\h\7\i\6\8\l\r\j\p\j\j\z\5\o\y\3\f\f\w\g\g\0\n\j\k\i\3\m\f\w\v\m\f\0\s\v\5\i\c\w\j\b\8\9\r\z\7\g\4\y\c\a\h\b\9\r\4\y\b\m\4\6\d\s\g\y\j\1\c\c\m\o\n\t\l\e\i\i\w\9\e\1\m\e\n\h\p\5\x\n\p\n\a\w\1\u\h\f\l\j\x\t\3\o\m\v\m\q\z\x\t\r\z\9\m\e\h\x\8\n\2\9\0\7\u\y\o\7\j\d\e\1\p\i\4\u\r\m\7\m\0\f\x\1\r\v\k\i\h\c\3\y\f\m\d\n\p\s\d\h\u\n\y\q\q\f\7\j\s\q\s\k\u\e\g\1\c\7\i\7\x\b\5\i\u\4\9\8\c\e ]] 00:07:45.997 00:07:45.997 real 0m5.233s 00:07:45.997 user 0m4.292s 00:07:45.997 sys 0m1.353s 00:07:45.997 23:51:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.997 23:51:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:45.997 ************************************ 00:07:45.997 END TEST dd_flag_nofollow 00:07:45.997 ************************************ 00:07:46.256 23:51:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:46.257 23:51:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.257 23:51:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.257 23:51:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:46.257 ************************************ 00:07:46.257 START TEST dd_flag_noatime 00:07:46.257 ************************************ 00:07:46.257 23:51:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:07:46.257 23:51:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:07:46.257 23:51:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:46.257 23:51:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:46.257 23:51:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:46.257 23:51:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:46.257 23:51:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.257 23:51:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1731973911 00:07:46.257 23:51:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.257 23:51:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1731973912 00:07:46.257 23:51:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:47.194 23:51:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:47.194 [2024-11-18 23:51:53.860807] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:47.194 [2024-11-18 23:51:53.860993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61797 ] 00:07:47.453 [2024-11-18 23:51:54.040470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.713 [2024-11-18 23:51:54.154029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.713 [2024-11-18 23:51:54.345192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.973  [2024-11-18T23:51:55.604Z] Copying: 512/512 [B] (average 500 kBps) 00:07:48.912 00:07:48.912 23:51:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:48.912 23:51:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1731973911 )) 00:07:48.912 23:51:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.912 23:51:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1731973912 )) 00:07:48.912 23:51:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.912 [2024-11-18 23:51:55.582698] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
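The noatime check brackets its copies with stat --printf=%X, which prints the access time in epoch seconds (the 1731973911/1731973912 values captured above). Reduced to its core, again with paths abbreviated, the property under test is:

    atime_before=$(stat --printf=%X dd.dump0)
    spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( atime_before == $(stat --printf=%X dd.dump0) ))   # noatime read left atime alone
    spdk_dd --if=dd.dump0 --of=dd.dump1                  # plain read, no noatime
    (( atime_before <  $(stat --printf=%X dd.dump0) ))   # ...which does bump it (@73)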
00:07:48.912 [2024-11-18 23:51:55.582896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61823 ] 00:07:49.171 [2024-11-18 23:51:55.758401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.430 [2024-11-18 23:51:55.871071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.430 [2024-11-18 23:51:56.062356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.689  [2024-11-18T23:51:57.317Z] Copying: 512/512 [B] (average 500 kBps) 00:07:50.625 00:07:50.625 23:51:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.625 23:51:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1731973916 )) 00:07:50.625 00:07:50.625 real 0m4.385s 00:07:50.625 user 0m2.750s 00:07:50.625 sys 0m1.874s 00:07:50.625 23:51:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.625 23:51:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:50.625 ************************************ 00:07:50.625 END TEST dd_flag_noatime 00:07:50.625 ************************************ 00:07:50.625 23:51:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:50.625 23:51:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.625 23:51:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.625 23:51:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:50.625 ************************************ 00:07:50.625 START TEST dd_flags_misc 00:07:50.625 ************************************ 00:07:50.625 23:51:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:07:50.625 23:51:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:50.625 23:51:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:50.625 23:51:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:50.625 23:51:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:50.625 23:51:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:50.625 23:51:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:50.626 23:51:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:50.626 23:51:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:50.626 23:51:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:50.626 [2024-11-18 23:51:57.279586] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
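dd_flags_misc then drives every read-flag/write-flag pairing through the same 512-byte copy. The arrays at dd/posix.sh@81-82 and the loops at @85-89 visible in the trace reduce to this skeleton, eight spdk_dd runs in all (spdk_pid61863 through spdk_pid62003 in the entries that follow):

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)   # the write side also takes the read flags
    for flag_ro in "${flags_ro[@]}"; do
        for flag_rw in "${flags_rw[@]}"; do
            spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
        done
    done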
00:07:50.626 [2024-11-18 23:51:57.279789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61863 ] 00:07:50.884 [2024-11-18 23:51:57.461694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.884 [2024-11-18 23:51:57.549083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.142 [2024-11-18 23:51:57.704139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.142  [2024-11-18T23:51:58.769Z] Copying: 512/512 [B] (average 500 kBps) 00:07:52.077 00:07:52.077 23:51:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ gaccg2p6h9m0fk3gttpvfs700dmwgbeuj6z2xudhlphswrro9irkagm379phjs650r2ugjaqfaeh58y8nxsieint4jxjde3mwspxqlmhf0wi71x5wqtp9cqjjlpqn0ikkcp62hi8oy5dsrfbw5rrr5sgbqutfznonuke0lx5q1pq7ts3o9sgi29fuzskv7efcz9kwowimyzujtrdkn5tw0e9brrxnz6k9ofyihcdq2dyz8kpp3qg26qf2jhqtymrknpxr5u4nevsticyw20fhdgzp6aihdt36m1pd2mmxq7jvnsdky27e8gnuokak9xd2jja5z54pwkj5xsmumne4ogb4xq4g4s2j0k0g2492xwgqsjb4200nibq05a7lev9to2feqrfqvuqwm69yzkofp5an320z9x4cjpnvosgbz299x65bmr3833sfkbdlcx3awjf33jl59k37szphnhedha914nl7n8m613vcbnugwdeo6q17pwje5hx2lp1smht == \g\a\c\c\g\2\p\6\h\9\m\0\f\k\3\g\t\t\p\v\f\s\7\0\0\d\m\w\g\b\e\u\j\6\z\2\x\u\d\h\l\p\h\s\w\r\r\o\9\i\r\k\a\g\m\3\7\9\p\h\j\s\6\5\0\r\2\u\g\j\a\q\f\a\e\h\5\8\y\8\n\x\s\i\e\i\n\t\4\j\x\j\d\e\3\m\w\s\p\x\q\l\m\h\f\0\w\i\7\1\x\5\w\q\t\p\9\c\q\j\j\l\p\q\n\0\i\k\k\c\p\6\2\h\i\8\o\y\5\d\s\r\f\b\w\5\r\r\r\5\s\g\b\q\u\t\f\z\n\o\n\u\k\e\0\l\x\5\q\1\p\q\7\t\s\3\o\9\s\g\i\2\9\f\u\z\s\k\v\7\e\f\c\z\9\k\w\o\w\i\m\y\z\u\j\t\r\d\k\n\5\t\w\0\e\9\b\r\r\x\n\z\6\k\9\o\f\y\i\h\c\d\q\2\d\y\z\8\k\p\p\3\q\g\2\6\q\f\2\j\h\q\t\y\m\r\k\n\p\x\r\5\u\4\n\e\v\s\t\i\c\y\w\2\0\f\h\d\g\z\p\6\a\i\h\d\t\3\6\m\1\p\d\2\m\m\x\q\7\j\v\n\s\d\k\y\2\7\e\8\g\n\u\o\k\a\k\9\x\d\2\j\j\a\5\z\5\4\p\w\k\j\5\x\s\m\u\m\n\e\4\o\g\b\4\x\q\4\g\4\s\2\j\0\k\0\g\2\4\9\2\x\w\g\q\s\j\b\4\2\0\0\n\i\b\q\0\5\a\7\l\e\v\9\t\o\2\f\e\q\r\f\q\v\u\q\w\m\6\9\y\z\k\o\f\p\5\a\n\3\2\0\z\9\x\4\c\j\p\n\v\o\s\g\b\z\2\9\9\x\6\5\b\m\r\3\8\3\3\s\f\k\b\d\l\c\x\3\a\w\j\f\3\3\j\l\5\9\k\3\7\s\z\p\h\n\h\e\d\h\a\9\1\4\n\l\7\n\8\m\6\1\3\v\c\b\n\u\g\w\d\e\o\6\q\1\7\p\w\j\e\5\h\x\2\l\p\1\s\m\h\t ]] 00:07:52.077 23:51:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:52.077 23:51:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:52.077 [2024-11-18 23:51:58.700318] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
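gen_bytes itself runs under xtrace_disable (dd/common.sh@98), so its body never appears in this log; only its effect does, a 512-character lowercase-alphanumeric payload. A purely hypothetical stand-in with the same observable behavior — not the actual dd/common.sh implementation:

    gen_bytes() {   # hypothetical: emit $1 random [a-z0-9] characters on stdout
        tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"
    }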
00:07:52.077 [2024-11-18 23:51:58.700515] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61885 ] 00:07:52.335 [2024-11-18 23:51:58.859351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.336 [2024-11-18 23:51:58.940163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.594 [2024-11-18 23:51:59.095989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.594  [2024-11-18T23:52:00.220Z] Copying: 512/512 [B] (average 500 kBps) 00:07:53.528 00:07:53.528 23:51:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ gaccg2p6h9m0fk3gttpvfs700dmwgbeuj6z2xudhlphswrro9irkagm379phjs650r2ugjaqfaeh58y8nxsieint4jxjde3mwspxqlmhf0wi71x5wqtp9cqjjlpqn0ikkcp62hi8oy5dsrfbw5rrr5sgbqutfznonuke0lx5q1pq7ts3o9sgi29fuzskv7efcz9kwowimyzujtrdkn5tw0e9brrxnz6k9ofyihcdq2dyz8kpp3qg26qf2jhqtymrknpxr5u4nevsticyw20fhdgzp6aihdt36m1pd2mmxq7jvnsdky27e8gnuokak9xd2jja5z54pwkj5xsmumne4ogb4xq4g4s2j0k0g2492xwgqsjb4200nibq05a7lev9to2feqrfqvuqwm69yzkofp5an320z9x4cjpnvosgbz299x65bmr3833sfkbdlcx3awjf33jl59k37szphnhedha914nl7n8m613vcbnugwdeo6q17pwje5hx2lp1smht == \g\a\c\c\g\2\p\6\h\9\m\0\f\k\3\g\t\t\p\v\f\s\7\0\0\d\m\w\g\b\e\u\j\6\z\2\x\u\d\h\l\p\h\s\w\r\r\o\9\i\r\k\a\g\m\3\7\9\p\h\j\s\6\5\0\r\2\u\g\j\a\q\f\a\e\h\5\8\y\8\n\x\s\i\e\i\n\t\4\j\x\j\d\e\3\m\w\s\p\x\q\l\m\h\f\0\w\i\7\1\x\5\w\q\t\p\9\c\q\j\j\l\p\q\n\0\i\k\k\c\p\6\2\h\i\8\o\y\5\d\s\r\f\b\w\5\r\r\r\5\s\g\b\q\u\t\f\z\n\o\n\u\k\e\0\l\x\5\q\1\p\q\7\t\s\3\o\9\s\g\i\2\9\f\u\z\s\k\v\7\e\f\c\z\9\k\w\o\w\i\m\y\z\u\j\t\r\d\k\n\5\t\w\0\e\9\b\r\r\x\n\z\6\k\9\o\f\y\i\h\c\d\q\2\d\y\z\8\k\p\p\3\q\g\2\6\q\f\2\j\h\q\t\y\m\r\k\n\p\x\r\5\u\4\n\e\v\s\t\i\c\y\w\2\0\f\h\d\g\z\p\6\a\i\h\d\t\3\6\m\1\p\d\2\m\m\x\q\7\j\v\n\s\d\k\y\2\7\e\8\g\n\u\o\k\a\k\9\x\d\2\j\j\a\5\z\5\4\p\w\k\j\5\x\s\m\u\m\n\e\4\o\g\b\4\x\q\4\g\4\s\2\j\0\k\0\g\2\4\9\2\x\w\g\q\s\j\b\4\2\0\0\n\i\b\q\0\5\a\7\l\e\v\9\t\o\2\f\e\q\r\f\q\v\u\q\w\m\6\9\y\z\k\o\f\p\5\a\n\3\2\0\z\9\x\4\c\j\p\n\v\o\s\g\b\z\2\9\9\x\6\5\b\m\r\3\8\3\3\s\f\k\b\d\l\c\x\3\a\w\j\f\3\3\j\l\5\9\k\3\7\s\z\p\h\n\h\e\d\h\a\9\1\4\n\l\7\n\8\m\6\1\3\v\c\b\n\u\g\w\d\e\o\6\q\1\7\p\w\j\e\5\h\x\2\l\p\1\s\m\h\t ]] 00:07:53.528 23:51:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.528 23:51:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:53.528 [2024-11-18 23:52:00.072873] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
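The page-wide [[ gaccg... == \g\a\c\c... ]] entries at dd/posix.sh@93 above are not log corruption: the right-hand side of == inside [[ ]] is a glob pattern, and bash's xtrace prints it backslash-escaped to show it is matched literally. The check itself is just a content comparison between the generated payload and what the flagged copy produced, along these lines (variable names hypothetical):

    data0=$(< dd.dump0)        # the gen_bytes payload
    data1=$(< dd.dump1)        # what spdk_dd wrote
    [[ $data1 == "$data0" ]]   # byte-for-byte equality, repeated per flag combination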
00:07:53.528 [2024-11-18 23:52:00.073048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61901 ] 00:07:53.786 [2024-11-18 23:52:00.240375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.786 [2024-11-18 23:52:00.321404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.045 [2024-11-18 23:52:00.478996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.045  [2024-11-18T23:52:01.670Z] Copying: 512/512 [B] (average 166 kBps) 00:07:54.978 00:07:54.979 23:52:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ gaccg2p6h9m0fk3gttpvfs700dmwgbeuj6z2xudhlphswrro9irkagm379phjs650r2ugjaqfaeh58y8nxsieint4jxjde3mwspxqlmhf0wi71x5wqtp9cqjjlpqn0ikkcp62hi8oy5dsrfbw5rrr5sgbqutfznonuke0lx5q1pq7ts3o9sgi29fuzskv7efcz9kwowimyzujtrdkn5tw0e9brrxnz6k9ofyihcdq2dyz8kpp3qg26qf2jhqtymrknpxr5u4nevsticyw20fhdgzp6aihdt36m1pd2mmxq7jvnsdky27e8gnuokak9xd2jja5z54pwkj5xsmumne4ogb4xq4g4s2j0k0g2492xwgqsjb4200nibq05a7lev9to2feqrfqvuqwm69yzkofp5an320z9x4cjpnvosgbz299x65bmr3833sfkbdlcx3awjf33jl59k37szphnhedha914nl7n8m613vcbnugwdeo6q17pwje5hx2lp1smht == \g\a\c\c\g\2\p\6\h\9\m\0\f\k\3\g\t\t\p\v\f\s\7\0\0\d\m\w\g\b\e\u\j\6\z\2\x\u\d\h\l\p\h\s\w\r\r\o\9\i\r\k\a\g\m\3\7\9\p\h\j\s\6\5\0\r\2\u\g\j\a\q\f\a\e\h\5\8\y\8\n\x\s\i\e\i\n\t\4\j\x\j\d\e\3\m\w\s\p\x\q\l\m\h\f\0\w\i\7\1\x\5\w\q\t\p\9\c\q\j\j\l\p\q\n\0\i\k\k\c\p\6\2\h\i\8\o\y\5\d\s\r\f\b\w\5\r\r\r\5\s\g\b\q\u\t\f\z\n\o\n\u\k\e\0\l\x\5\q\1\p\q\7\t\s\3\o\9\s\g\i\2\9\f\u\z\s\k\v\7\e\f\c\z\9\k\w\o\w\i\m\y\z\u\j\t\r\d\k\n\5\t\w\0\e\9\b\r\r\x\n\z\6\k\9\o\f\y\i\h\c\d\q\2\d\y\z\8\k\p\p\3\q\g\2\6\q\f\2\j\h\q\t\y\m\r\k\n\p\x\r\5\u\4\n\e\v\s\t\i\c\y\w\2\0\f\h\d\g\z\p\6\a\i\h\d\t\3\6\m\1\p\d\2\m\m\x\q\7\j\v\n\s\d\k\y\2\7\e\8\g\n\u\o\k\a\k\9\x\d\2\j\j\a\5\z\5\4\p\w\k\j\5\x\s\m\u\m\n\e\4\o\g\b\4\x\q\4\g\4\s\2\j\0\k\0\g\2\4\9\2\x\w\g\q\s\j\b\4\2\0\0\n\i\b\q\0\5\a\7\l\e\v\9\t\o\2\f\e\q\r\f\q\v\u\q\w\m\6\9\y\z\k\o\f\p\5\a\n\3\2\0\z\9\x\4\c\j\p\n\v\o\s\g\b\z\2\9\9\x\6\5\b\m\r\3\8\3\3\s\f\k\b\d\l\c\x\3\a\w\j\f\3\3\j\l\5\9\k\3\7\s\z\p\h\n\h\e\d\h\a\9\1\4\n\l\7\n\8\m\6\1\3\v\c\b\n\u\g\w\d\e\o\6\q\1\7\p\w\j\e\5\h\x\2\l\p\1\s\m\h\t ]] 00:07:54.979 23:52:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.979 23:52:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:54.979 [2024-11-18 23:52:01.506513] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:54.979 [2024-11-18 23:52:01.506712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61922 ] 00:07:55.237 [2024-11-18 23:52:01.685059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.237 [2024-11-18 23:52:01.769751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.237 [2024-11-18 23:52:01.912852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.495  [2024-11-18T23:52:03.119Z] Copying: 512/512 [B] (average 125 kBps) 00:07:56.427 00:07:56.427 23:52:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ gaccg2p6h9m0fk3gttpvfs700dmwgbeuj6z2xudhlphswrro9irkagm379phjs650r2ugjaqfaeh58y8nxsieint4jxjde3mwspxqlmhf0wi71x5wqtp9cqjjlpqn0ikkcp62hi8oy5dsrfbw5rrr5sgbqutfznonuke0lx5q1pq7ts3o9sgi29fuzskv7efcz9kwowimyzujtrdkn5tw0e9brrxnz6k9ofyihcdq2dyz8kpp3qg26qf2jhqtymrknpxr5u4nevsticyw20fhdgzp6aihdt36m1pd2mmxq7jvnsdky27e8gnuokak9xd2jja5z54pwkj5xsmumne4ogb4xq4g4s2j0k0g2492xwgqsjb4200nibq05a7lev9to2feqrfqvuqwm69yzkofp5an320z9x4cjpnvosgbz299x65bmr3833sfkbdlcx3awjf33jl59k37szphnhedha914nl7n8m613vcbnugwdeo6q17pwje5hx2lp1smht == \g\a\c\c\g\2\p\6\h\9\m\0\f\k\3\g\t\t\p\v\f\s\7\0\0\d\m\w\g\b\e\u\j\6\z\2\x\u\d\h\l\p\h\s\w\r\r\o\9\i\r\k\a\g\m\3\7\9\p\h\j\s\6\5\0\r\2\u\g\j\a\q\f\a\e\h\5\8\y\8\n\x\s\i\e\i\n\t\4\j\x\j\d\e\3\m\w\s\p\x\q\l\m\h\f\0\w\i\7\1\x\5\w\q\t\p\9\c\q\j\j\l\p\q\n\0\i\k\k\c\p\6\2\h\i\8\o\y\5\d\s\r\f\b\w\5\r\r\r\5\s\g\b\q\u\t\f\z\n\o\n\u\k\e\0\l\x\5\q\1\p\q\7\t\s\3\o\9\s\g\i\2\9\f\u\z\s\k\v\7\e\f\c\z\9\k\w\o\w\i\m\y\z\u\j\t\r\d\k\n\5\t\w\0\e\9\b\r\r\x\n\z\6\k\9\o\f\y\i\h\c\d\q\2\d\y\z\8\k\p\p\3\q\g\2\6\q\f\2\j\h\q\t\y\m\r\k\n\p\x\r\5\u\4\n\e\v\s\t\i\c\y\w\2\0\f\h\d\g\z\p\6\a\i\h\d\t\3\6\m\1\p\d\2\m\m\x\q\7\j\v\n\s\d\k\y\2\7\e\8\g\n\u\o\k\a\k\9\x\d\2\j\j\a\5\z\5\4\p\w\k\j\5\x\s\m\u\m\n\e\4\o\g\b\4\x\q\4\g\4\s\2\j\0\k\0\g\2\4\9\2\x\w\g\q\s\j\b\4\2\0\0\n\i\b\q\0\5\a\7\l\e\v\9\t\o\2\f\e\q\r\f\q\v\u\q\w\m\6\9\y\z\k\o\f\p\5\a\n\3\2\0\z\9\x\4\c\j\p\n\v\o\s\g\b\z\2\9\9\x\6\5\b\m\r\3\8\3\3\s\f\k\b\d\l\c\x\3\a\w\j\f\3\3\j\l\5\9\k\3\7\s\z\p\h\n\h\e\d\h\a\9\1\4\n\l\7\n\8\m\6\1\3\v\c\b\n\u\g\w\d\e\o\6\q\1\7\p\w\j\e\5\h\x\2\l\p\1\s\m\h\t ]] 00:07:56.427 23:52:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:56.427 23:52:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:56.427 23:52:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:56.427 23:52:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:56.427 23:52:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:56.427 23:52:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:56.428 [2024-11-18 23:52:02.921172] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:56.428 [2024-11-18 23:52:02.921327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61944 ] 00:07:56.428 [2024-11-18 23:52:03.082265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.685 [2024-11-18 23:52:03.163952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.685 [2024-11-18 23:52:03.318138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.942  [2024-11-18T23:52:04.568Z] Copying: 512/512 [B] (average 500 kBps) 00:07:57.876 00:07:57.877 23:52:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5pkw5yyg00k02580ldmlahfs6jfh7771xm250vbibkuc7sgbav9ycvc3cx745dr0jmi0fx1su9ldge0shsu9a0b8cj02pznc6ao6ugw3iqjqj9drhoo2r163vd4n7b4c6lyt5jwsrb14fmhx42loqmewsdu601ngy2zq2t7gg2npn0oml9glilvt2i2ip6v1e1wcnko9obsen6qnqnt8g8j1celjmpv3fudrm25w8xd9h1nhg88bg9pk12i9iw429e6numusoe9ewoia5773cso7s3024hiomike5qmjssml629pv1p71de0ujp962c4ojeau2zegm5u7tz7ncuj5qrdrskyyq1d6u3yogfpn20w3zwgszwlt6px4ldlp3w3vhovcitcufe1snr8oiyh78b51as1zisvbqstz78mfm3b8cggcxkz02iv8n1q88t5sn9h2wm0pvio87l6yp3u4fsnkl6n904axz1mwkwvopbqkvcudnrbbqkhdelgobwd == \5\p\k\w\5\y\y\g\0\0\k\0\2\5\8\0\l\d\m\l\a\h\f\s\6\j\f\h\7\7\7\1\x\m\2\5\0\v\b\i\b\k\u\c\7\s\g\b\a\v\9\y\c\v\c\3\c\x\7\4\5\d\r\0\j\m\i\0\f\x\1\s\u\9\l\d\g\e\0\s\h\s\u\9\a\0\b\8\c\j\0\2\p\z\n\c\6\a\o\6\u\g\w\3\i\q\j\q\j\9\d\r\h\o\o\2\r\1\6\3\v\d\4\n\7\b\4\c\6\l\y\t\5\j\w\s\r\b\1\4\f\m\h\x\4\2\l\o\q\m\e\w\s\d\u\6\0\1\n\g\y\2\z\q\2\t\7\g\g\2\n\p\n\0\o\m\l\9\g\l\i\l\v\t\2\i\2\i\p\6\v\1\e\1\w\c\n\k\o\9\o\b\s\e\n\6\q\n\q\n\t\8\g\8\j\1\c\e\l\j\m\p\v\3\f\u\d\r\m\2\5\w\8\x\d\9\h\1\n\h\g\8\8\b\g\9\p\k\1\2\i\9\i\w\4\2\9\e\6\n\u\m\u\s\o\e\9\e\w\o\i\a\5\7\7\3\c\s\o\7\s\3\0\2\4\h\i\o\m\i\k\e\5\q\m\j\s\s\m\l\6\2\9\p\v\1\p\7\1\d\e\0\u\j\p\9\6\2\c\4\o\j\e\a\u\2\z\e\g\m\5\u\7\t\z\7\n\c\u\j\5\q\r\d\r\s\k\y\y\q\1\d\6\u\3\y\o\g\f\p\n\2\0\w\3\z\w\g\s\z\w\l\t\6\p\x\4\l\d\l\p\3\w\3\v\h\o\v\c\i\t\c\u\f\e\1\s\n\r\8\o\i\y\h\7\8\b\5\1\a\s\1\z\i\s\v\b\q\s\t\z\7\8\m\f\m\3\b\8\c\g\g\c\x\k\z\0\2\i\v\8\n\1\q\8\8\t\5\s\n\9\h\2\w\m\0\p\v\i\o\8\7\l\6\y\p\3\u\4\f\s\n\k\l\6\n\9\0\4\a\x\z\1\m\w\k\w\v\o\p\b\q\k\v\c\u\d\n\r\b\b\q\k\h\d\e\l\g\o\b\w\d ]] 00:07:57.877 23:52:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:57.877 23:52:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:57.877 [2024-11-18 23:52:04.389834] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:57.877 [2024-11-18 23:52:04.390000] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61960 ] 00:07:57.877 [2024-11-18 23:52:04.556852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.134 [2024-11-18 23:52:04.643638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.134 [2024-11-18 23:52:04.792567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.392  [2024-11-18T23:52:06.018Z] Copying: 512/512 [B] (average 500 kBps) 00:07:59.326 00:07:59.326 23:52:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5pkw5yyg00k02580ldmlahfs6jfh7771xm250vbibkuc7sgbav9ycvc3cx745dr0jmi0fx1su9ldge0shsu9a0b8cj02pznc6ao6ugw3iqjqj9drhoo2r163vd4n7b4c6lyt5jwsrb14fmhx42loqmewsdu601ngy2zq2t7gg2npn0oml9glilvt2i2ip6v1e1wcnko9obsen6qnqnt8g8j1celjmpv3fudrm25w8xd9h1nhg88bg9pk12i9iw429e6numusoe9ewoia5773cso7s3024hiomike5qmjssml629pv1p71de0ujp962c4ojeau2zegm5u7tz7ncuj5qrdrskyyq1d6u3yogfpn20w3zwgszwlt6px4ldlp3w3vhovcitcufe1snr8oiyh78b51as1zisvbqstz78mfm3b8cggcxkz02iv8n1q88t5sn9h2wm0pvio87l6yp3u4fsnkl6n904axz1mwkwvopbqkvcudnrbbqkhdelgobwd == \5\p\k\w\5\y\y\g\0\0\k\0\2\5\8\0\l\d\m\l\a\h\f\s\6\j\f\h\7\7\7\1\x\m\2\5\0\v\b\i\b\k\u\c\7\s\g\b\a\v\9\y\c\v\c\3\c\x\7\4\5\d\r\0\j\m\i\0\f\x\1\s\u\9\l\d\g\e\0\s\h\s\u\9\a\0\b\8\c\j\0\2\p\z\n\c\6\a\o\6\u\g\w\3\i\q\j\q\j\9\d\r\h\o\o\2\r\1\6\3\v\d\4\n\7\b\4\c\6\l\y\t\5\j\w\s\r\b\1\4\f\m\h\x\4\2\l\o\q\m\e\w\s\d\u\6\0\1\n\g\y\2\z\q\2\t\7\g\g\2\n\p\n\0\o\m\l\9\g\l\i\l\v\t\2\i\2\i\p\6\v\1\e\1\w\c\n\k\o\9\o\b\s\e\n\6\q\n\q\n\t\8\g\8\j\1\c\e\l\j\m\p\v\3\f\u\d\r\m\2\5\w\8\x\d\9\h\1\n\h\g\8\8\b\g\9\p\k\1\2\i\9\i\w\4\2\9\e\6\n\u\m\u\s\o\e\9\e\w\o\i\a\5\7\7\3\c\s\o\7\s\3\0\2\4\h\i\o\m\i\k\e\5\q\m\j\s\s\m\l\6\2\9\p\v\1\p\7\1\d\e\0\u\j\p\9\6\2\c\4\o\j\e\a\u\2\z\e\g\m\5\u\7\t\z\7\n\c\u\j\5\q\r\d\r\s\k\y\y\q\1\d\6\u\3\y\o\g\f\p\n\2\0\w\3\z\w\g\s\z\w\l\t\6\p\x\4\l\d\l\p\3\w\3\v\h\o\v\c\i\t\c\u\f\e\1\s\n\r\8\o\i\y\h\7\8\b\5\1\a\s\1\z\i\s\v\b\q\s\t\z\7\8\m\f\m\3\b\8\c\g\g\c\x\k\z\0\2\i\v\8\n\1\q\8\8\t\5\s\n\9\h\2\w\m\0\p\v\i\o\8\7\l\6\y\p\3\u\4\f\s\n\k\l\6\n\9\0\4\a\x\z\1\m\w\k\w\v\o\p\b\q\k\v\c\u\d\n\r\b\b\q\k\h\d\e\l\g\o\b\w\d ]] 00:07:59.326 23:52:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:59.326 23:52:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:59.326 [2024-11-18 23:52:05.860314] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:59.326 [2024-11-18 23:52:05.860518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61987 ] 00:07:59.584 [2024-11-18 23:52:06.037160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.584 [2024-11-18 23:52:06.134359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.842 [2024-11-18 23:52:06.286058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.842  [2024-11-18T23:52:07.489Z] Copying: 512/512 [B] (average 166 kBps) 00:08:00.797 00:08:00.797 23:52:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5pkw5yyg00k02580ldmlahfs6jfh7771xm250vbibkuc7sgbav9ycvc3cx745dr0jmi0fx1su9ldge0shsu9a0b8cj02pznc6ao6ugw3iqjqj9drhoo2r163vd4n7b4c6lyt5jwsrb14fmhx42loqmewsdu601ngy2zq2t7gg2npn0oml9glilvt2i2ip6v1e1wcnko9obsen6qnqnt8g8j1celjmpv3fudrm25w8xd9h1nhg88bg9pk12i9iw429e6numusoe9ewoia5773cso7s3024hiomike5qmjssml629pv1p71de0ujp962c4ojeau2zegm5u7tz7ncuj5qrdrskyyq1d6u3yogfpn20w3zwgszwlt6px4ldlp3w3vhovcitcufe1snr8oiyh78b51as1zisvbqstz78mfm3b8cggcxkz02iv8n1q88t5sn9h2wm0pvio87l6yp3u4fsnkl6n904axz1mwkwvopbqkvcudnrbbqkhdelgobwd == \5\p\k\w\5\y\y\g\0\0\k\0\2\5\8\0\l\d\m\l\a\h\f\s\6\j\f\h\7\7\7\1\x\m\2\5\0\v\b\i\b\k\u\c\7\s\g\b\a\v\9\y\c\v\c\3\c\x\7\4\5\d\r\0\j\m\i\0\f\x\1\s\u\9\l\d\g\e\0\s\h\s\u\9\a\0\b\8\c\j\0\2\p\z\n\c\6\a\o\6\u\g\w\3\i\q\j\q\j\9\d\r\h\o\o\2\r\1\6\3\v\d\4\n\7\b\4\c\6\l\y\t\5\j\w\s\r\b\1\4\f\m\h\x\4\2\l\o\q\m\e\w\s\d\u\6\0\1\n\g\y\2\z\q\2\t\7\g\g\2\n\p\n\0\o\m\l\9\g\l\i\l\v\t\2\i\2\i\p\6\v\1\e\1\w\c\n\k\o\9\o\b\s\e\n\6\q\n\q\n\t\8\g\8\j\1\c\e\l\j\m\p\v\3\f\u\d\r\m\2\5\w\8\x\d\9\h\1\n\h\g\8\8\b\g\9\p\k\1\2\i\9\i\w\4\2\9\e\6\n\u\m\u\s\o\e\9\e\w\o\i\a\5\7\7\3\c\s\o\7\s\3\0\2\4\h\i\o\m\i\k\e\5\q\m\j\s\s\m\l\6\2\9\p\v\1\p\7\1\d\e\0\u\j\p\9\6\2\c\4\o\j\e\a\u\2\z\e\g\m\5\u\7\t\z\7\n\c\u\j\5\q\r\d\r\s\k\y\y\q\1\d\6\u\3\y\o\g\f\p\n\2\0\w\3\z\w\g\s\z\w\l\t\6\p\x\4\l\d\l\p\3\w\3\v\h\o\v\c\i\t\c\u\f\e\1\s\n\r\8\o\i\y\h\7\8\b\5\1\a\s\1\z\i\s\v\b\q\s\t\z\7\8\m\f\m\3\b\8\c\g\g\c\x\k\z\0\2\i\v\8\n\1\q\8\8\t\5\s\n\9\h\2\w\m\0\p\v\i\o\8\7\l\6\y\p\3\u\4\f\s\n\k\l\6\n\9\0\4\a\x\z\1\m\w\k\w\v\o\p\b\q\k\v\c\u\d\n\r\b\b\q\k\h\d\e\l\g\o\b\w\d ]] 00:08:00.797 23:52:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:00.797 23:52:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:00.797 [2024-11-18 23:52:07.286603] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:08:00.797 [2024-11-18 23:52:07.286782] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62003 ] 00:08:00.797 [2024-11-18 23:52:07.449852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.059 [2024-11-18 23:52:07.551684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.059 [2024-11-18 23:52:07.709468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.317  [2024-11-18T23:52:08.944Z] Copying: 512/512 [B] (average 125 kBps) 00:08:02.252 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 5pkw5yyg00k02580ldmlahfs6jfh7771xm250vbibkuc7sgbav9ycvc3cx745dr0jmi0fx1su9ldge0shsu9a0b8cj02pznc6ao6ugw3iqjqj9drhoo2r163vd4n7b4c6lyt5jwsrb14fmhx42loqmewsdu601ngy2zq2t7gg2npn0oml9glilvt2i2ip6v1e1wcnko9obsen6qnqnt8g8j1celjmpv3fudrm25w8xd9h1nhg88bg9pk12i9iw429e6numusoe9ewoia5773cso7s3024hiomike5qmjssml629pv1p71de0ujp962c4ojeau2zegm5u7tz7ncuj5qrdrskyyq1d6u3yogfpn20w3zwgszwlt6px4ldlp3w3vhovcitcufe1snr8oiyh78b51as1zisvbqstz78mfm3b8cggcxkz02iv8n1q88t5sn9h2wm0pvio87l6yp3u4fsnkl6n904axz1mwkwvopbqkvcudnrbbqkhdelgobwd == \5\p\k\w\5\y\y\g\0\0\k\0\2\5\8\0\l\d\m\l\a\h\f\s\6\j\f\h\7\7\7\1\x\m\2\5\0\v\b\i\b\k\u\c\7\s\g\b\a\v\9\y\c\v\c\3\c\x\7\4\5\d\r\0\j\m\i\0\f\x\1\s\u\9\l\d\g\e\0\s\h\s\u\9\a\0\b\8\c\j\0\2\p\z\n\c\6\a\o\6\u\g\w\3\i\q\j\q\j\9\d\r\h\o\o\2\r\1\6\3\v\d\4\n\7\b\4\c\6\l\y\t\5\j\w\s\r\b\1\4\f\m\h\x\4\2\l\o\q\m\e\w\s\d\u\6\0\1\n\g\y\2\z\q\2\t\7\g\g\2\n\p\n\0\o\m\l\9\g\l\i\l\v\t\2\i\2\i\p\6\v\1\e\1\w\c\n\k\o\9\o\b\s\e\n\6\q\n\q\n\t\8\g\8\j\1\c\e\l\j\m\p\v\3\f\u\d\r\m\2\5\w\8\x\d\9\h\1\n\h\g\8\8\b\g\9\p\k\1\2\i\9\i\w\4\2\9\e\6\n\u\m\u\s\o\e\9\e\w\o\i\a\5\7\7\3\c\s\o\7\s\3\0\2\4\h\i\o\m\i\k\e\5\q\m\j\s\s\m\l\6\2\9\p\v\1\p\7\1\d\e\0\u\j\p\9\6\2\c\4\o\j\e\a\u\2\z\e\g\m\5\u\7\t\z\7\n\c\u\j\5\q\r\d\r\s\k\y\y\q\1\d\6\u\3\y\o\g\f\p\n\2\0\w\3\z\w\g\s\z\w\l\t\6\p\x\4\l\d\l\p\3\w\3\v\h\o\v\c\i\t\c\u\f\e\1\s\n\r\8\o\i\y\h\7\8\b\5\1\a\s\1\z\i\s\v\b\q\s\t\z\7\8\m\f\m\3\b\8\c\g\g\c\x\k\z\0\2\i\v\8\n\1\q\8\8\t\5\s\n\9\h\2\w\m\0\p\v\i\o\8\7\l\6\y\p\3\u\4\f\s\n\k\l\6\n\9\0\4\a\x\z\1\m\w\k\w\v\o\p\b\q\k\v\c\u\d\n\r\b\b\q\k\h\d\e\l\g\o\b\w\d ]] 00:08:02.252 00:08:02.252 real 0m11.506s 00:08:02.252 user 0m9.236s 00:08:02.252 sys 0m6.336s 00:08:02.252 ************************************ 00:08:02.252 END TEST dd_flags_misc 00:08:02.252 ************************************ 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:02.252 * Second test run, disabling liburing, forcing AIO 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.252 ************************************ 00:08:02.252 START TEST dd_flag_append_forced_aio 00:08:02.252 ************************************ 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=x7n5bxmmejt7926bzs4xc4jljuy6ke75 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=e3tio6blqwhwwocdupa4d355btk3rpoy 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s x7n5bxmmejt7926bzs4xc4jljuy6ke75 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s e3tio6blqwhwwocdupa4d355btk3rpoy 00:08:02.252 23:52:08 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:02.252 [2024-11-18 23:52:08.817484] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
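From this point the posix suite repeats itself with DD_APP+=("--aio"), so every spdk_dd invocation now takes the AIO code path instead of the default uring one. The append case above generates two 32-byte strings, writes each out with printf %s, appends one file onto the other through --oflag=append, and the [[ e3tio...x7n5... == ... ]] check at dd/posix.sh@27 then asserts the concatenation. Roughly, with paths abbreviated:

    printf %s "$dump0" > dd.dump0      # bytes to append (x7n5... above)
    printf %s "$dump1" > dd.dump1      # pre-existing destination bytes (e3tio...)
    spdk_dd --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
    [[ $(< dd.dump1) == "$dump1$dump0" ]]   # appended bytes land after the old ones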
00:08:02.252 [2024-11-18 23:52:08.817677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62049 ] 00:08:02.510 [2024-11-18 23:52:08.977741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.510 [2024-11-18 23:52:09.058989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.768 [2024-11-18 23:52:09.213631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.768  [2024-11-18T23:52:10.396Z] Copying: 32/32 [B] (average 31 kBps) 00:08:03.704 00:08:03.704 23:52:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ e3tio6blqwhwwocdupa4d355btk3rpoyx7n5bxmmejt7926bzs4xc4jljuy6ke75 == \e\3\t\i\o\6\b\l\q\w\h\w\w\o\c\d\u\p\a\4\d\3\5\5\b\t\k\3\r\p\o\y\x\7\n\5\b\x\m\m\e\j\t\7\9\2\6\b\z\s\4\x\c\4\j\l\j\u\y\6\k\e\7\5 ]] 00:08:03.704 00:08:03.704 real 0m1.388s 00:08:03.704 user 0m1.101s 00:08:03.704 sys 0m0.167s 00:08:03.704 23:52:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.704 23:52:10 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:03.704 ************************************ 00:08:03.704 END TEST dd_flag_append_forced_aio 00:08:03.704 ************************************ 00:08:03.704 23:52:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:03.704 23:52:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.704 23:52:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.704 23:52:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:03.704 ************************************ 00:08:03.704 START TEST dd_flag_directory_forced_aio 00:08:03.704 ************************************ 00:08:03.704 23:52:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:08:03.704 23:52:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:03.704 23:52:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:03.704 23:52:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:03.704 23:52:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.704 23:52:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.704 23:52:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.704 23:52:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.704 23:52:10 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.704 23:52:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.704 23:52:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.704 23:52:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:03.704 23:52:10 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:03.704 [2024-11-18 23:52:10.255965] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:03.704 [2024-11-18 23:52:10.256145] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62082 ] 00:08:03.962 [2024-11-18 23:52:10.414383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.962 [2024-11-18 23:52:10.495955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.963 [2024-11-18 23:52:10.646482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.221 [2024-11-18 23:52:10.743021] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:04.221 [2024-11-18 23:52:10.743105] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:04.221 [2024-11-18 23:52:10.743128] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.788 [2024-11-18 23:52:11.309172] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:05.047 23:52:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:05.047 23:52:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:05.047 23:52:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:05.047 23:52:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:05.047 23:52:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:05.047 23:52:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:05.047 23:52:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:05.047 23:52:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:05.047 23:52:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:05.047 23:52:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.048 23:52:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.048 23:52:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.048 23:52:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.048 23:52:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.048 23:52:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.048 23:52:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.048 23:52:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.048 23:52:11 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:05.048 [2024-11-18 23:52:11.638112] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:05.048 [2024-11-18 23:52:11.638334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62098 ] 00:08:05.305 [2024-11-18 23:52:11.815030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.305 [2024-11-18 23:52:11.896273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.564 [2024-11-18 23:52:12.041375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.564 [2024-11-18 23:52:12.121288] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.564 [2024-11-18 23:52:12.121378] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.564 [2024-11-18 23:52:12.121403] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.131 [2024-11-18 23:52:12.691208] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:06.390 23:52:12 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:06.390 00:08:06.390 real 0m2.759s 00:08:06.390 user 0m2.191s 00:08:06.390 sys 0m0.352s 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.390 ************************************ 00:08:06.390 END TEST dd_flag_directory_forced_aio 00:08:06.390 ************************************ 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:06.390 ************************************ 00:08:06.390 START TEST dd_flag_nofollow_forced_aio 00:08:06.390 ************************************ 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:06.390 23:52:12 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.649 [2024-11-18 23:52:13.096519] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:06.649 [2024-11-18 23:52:13.096704] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62144 ] 00:08:06.649 [2024-11-18 23:52:13.276966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.909 [2024-11-18 23:52:13.367147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.909 [2024-11-18 23:52:13.511650] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.909 [2024-11-18 23:52:13.593918] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:06.909 [2024-11-18 23:52:13.594018] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:06.909 [2024-11-18 23:52:13.594042] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.845 [2024-11-18 23:52:14.187296] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:07.845 23:52:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:07.845 23:52:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:07.845 23:52:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:07.845 23:52:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:07.845 23:52:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:07.845 23:52:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:07.845 23:52:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:07.845 23:52:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:07.845 23:52:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:07.845 23:52:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.845 23:52:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.845 23:52:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.845 23:52:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.845 23:52:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.845 23:52:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.845 23:52:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.845 23:52:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.845 23:52:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:07.845 [2024-11-18 23:52:14.524730] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:07.845 [2024-11-18 23:52:14.524893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62160 ] 00:08:08.103 [2024-11-18 23:52:14.702278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.362 [2024-11-18 23:52:14.798312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.362 [2024-11-18 23:52:14.952548] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.362 [2024-11-18 23:52:15.039718] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:08.362 [2024-11-18 23:52:15.039798] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:08.362 [2024-11-18 23:52:15.039824] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.297 [2024-11-18 23:52:15.685301] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:09.297 23:52:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:09.297 23:52:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:09.297 23:52:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:09.297 23:52:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:09.297 23:52:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:09.297 23:52:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:09.297 23:52:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:08:09.297 23:52:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:09.297 23:52:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:09.297 23:52:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.556 [2024-11-18 23:52:16.050157] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:09.556 [2024-11-18 23:52:16.050668] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62180 ] 00:08:09.556 [2024-11-18 23:52:16.225384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.815 [2024-11-18 23:52:16.309981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.815 [2024-11-18 23:52:16.464452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.073  [2024-11-18T23:52:17.702Z] Copying: 512/512 [B] (average 500 kBps) 00:08:11.010 00:08:11.011 23:52:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ sfy6tlufx3a2kqg2gjidh25c1dj4iazy9vvio8t38nj5c12hxw0p59lpji6vhfrgfuceqvce6i0n58tnb9ij3iqwyd9kn6v1g8umb1afzqzrx73hi8hckxf120mc6j4bo9mi8q4x46n6v53zwzunl5z2dk80sbxe5blqv67t3jwjrw30ny9geuuhu2mn0qtsre2b4i3hxkywlawkpm38lqysdtph2pywvfu78s9ujlihvdwmynyliem7d177bv85g79iwu8wlme1zeo8luj6cytgxbico0yggqhvz98xopwd9i9oo5l932kj7447waj3hijq80oo5tg6k23ogqcz9lwpc18s6tsiae380au1an40qwzl484pc9pjhwbi5vfp4nzb5pbf081wzp1mnhniq06yingsmcf5c7w3yllsgvfk1cd2noa4hr501s1u94z0fxf0gks5wn4e4ghfp8q5t566ugknvodggl8avfojjv672882wllgmod4epwuaibe == \s\f\y\6\t\l\u\f\x\3\a\2\k\q\g\2\g\j\i\d\h\2\5\c\1\d\j\4\i\a\z\y\9\v\v\i\o\8\t\3\8\n\j\5\c\1\2\h\x\w\0\p\5\9\l\p\j\i\6\v\h\f\r\g\f\u\c\e\q\v\c\e\6\i\0\n\5\8\t\n\b\9\i\j\3\i\q\w\y\d\9\k\n\6\v\1\g\8\u\m\b\1\a\f\z\q\z\r\x\7\3\h\i\8\h\c\k\x\f\1\2\0\m\c\6\j\4\b\o\9\m\i\8\q\4\x\4\6\n\6\v\5\3\z\w\z\u\n\l\5\z\2\d\k\8\0\s\b\x\e\5\b\l\q\v\6\7\t\3\j\w\j\r\w\3\0\n\y\9\g\e\u\u\h\u\2\m\n\0\q\t\s\r\e\2\b\4\i\3\h\x\k\y\w\l\a\w\k\p\m\3\8\l\q\y\s\d\t\p\h\2\p\y\w\v\f\u\7\8\s\9\u\j\l\i\h\v\d\w\m\y\n\y\l\i\e\m\7\d\1\7\7\b\v\8\5\g\7\9\i\w\u\8\w\l\m\e\1\z\e\o\8\l\u\j\6\c\y\t\g\x\b\i\c\o\0\y\g\g\q\h\v\z\9\8\x\o\p\w\d\9\i\9\o\o\5\l\9\3\2\k\j\7\4\4\7\w\a\j\3\h\i\j\q\8\0\o\o\5\t\g\6\k\2\3\o\g\q\c\z\9\l\w\p\c\1\8\s\6\t\s\i\a\e\3\8\0\a\u\1\a\n\4\0\q\w\z\l\4\8\4\p\c\9\p\j\h\w\b\i\5\v\f\p\4\n\z\b\5\p\b\f\0\8\1\w\z\p\1\m\n\h\n\i\q\0\6\y\i\n\g\s\m\c\f\5\c\7\w\3\y\l\l\s\g\v\f\k\1\c\d\2\n\o\a\4\h\r\5\0\1\s\1\u\9\4\z\0\f\x\f\0\g\k\s\5\w\n\4\e\4\g\h\f\p\8\q\5\t\5\6\6\u\g\k\n\v\o\d\g\g\l\8\a\v\f\o\j\j\v\6\7\2\8\8\2\w\l\l\g\m\o\d\4\e\p\w\u\a\i\b\e ]] 00:08:11.011 00:08:11.011 real 0m4.393s 00:08:11.011 user 0m3.485s 00:08:11.011 sys 0m0.561s 00:08:11.011 ************************************ 00:08:11.011 END TEST dd_flag_nofollow_forced_aio 00:08:11.011 ************************************ 00:08:11.011 23:52:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.011 23:52:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:11.011 23:52:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:08:11.011 23:52:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.011 23:52:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.011 23:52:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:11.011 ************************************ 00:08:11.011 START TEST dd_flag_noatime_forced_aio 00:08:11.011 ************************************ 00:08:11.011 23:52:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:08:11.011 23:52:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:11.011 23:52:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:11.011 23:52:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:11.011 23:52:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:11.011 23:52:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:11.011 23:52:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:11.011 23:52:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1731973936 00:08:11.011 23:52:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.011 23:52:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1731973937 00:08:11.011 23:52:17 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:11.947 23:52:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.947 [2024-11-18 23:52:18.565118] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
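The noatime pass now under way (spdk_pid62232 in the lines below) records each file's access time with stat --printf=%X, sleeps one second so any change would be visible, copies with --iflag=noatime, and then asserts that the source's atime has not moved. A minimal sketch of that check using the paths from this trace; the real assertion lives in dd/posix.sh and may differ in detail:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    atime_if=$(stat --printf=%X "$test_file0")     # e.g. 1731973936 in this run
    sleep 1
    "$SPDK_DD" --aio --if="$test_file0" --iflag=noatime --of="$test_file1"
    (( atime_if == $(stat --printf=%X "$test_file0") ))   # noatime: source atime unchanged
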
00:08:11.948 [2024-11-18 23:52:18.566049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62232 ] 00:08:12.206 [2024-11-18 23:52:18.743441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.206 [2024-11-18 23:52:18.827119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.465 [2024-11-18 23:52:18.986045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.465  [2024-11-18T23:52:20.090Z] Copying: 512/512 [B] (average 500 kBps) 00:08:13.398 00:08:13.398 23:52:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:13.398 23:52:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1731973936 )) 00:08:13.398 23:52:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.398 23:52:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1731973937 )) 00:08:13.398 23:52:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.398 [2024-11-18 23:52:19.984332] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:13.398 [2024-11-18 23:52:19.984505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62256 ] 00:08:13.656 [2024-11-18 23:52:20.162017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.656 [2024-11-18 23:52:20.250520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.914 [2024-11-18 23:52:20.401025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.914  [2024-11-18T23:52:21.542Z] Copying: 512/512 [B] (average 500 kBps) 00:08:14.850 00:08:14.850 23:52:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:14.850 23:52:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1731973940 )) 00:08:14.850 00:08:14.850 real 0m3.859s 00:08:14.850 user 0m2.248s 00:08:14.850 sys 0m0.371s 00:08:14.850 23:52:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.850 23:52:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:14.850 ************************************ 00:08:14.850 END TEST dd_flag_noatime_forced_aio 00:08:14.850 ************************************ 00:08:14.850 23:52:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:14.850 23:52:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.850 23:52:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.850 23:52:21 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:14.850 ************************************ 00:08:14.850 START TEST dd_flags_misc_forced_aio 00:08:14.850 ************************************ 00:08:14.850 23:52:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:08:14.850 23:52:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:14.850 23:52:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:14.850 23:52:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:14.850 23:52:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:14.850 23:52:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:14.850 23:52:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:14.850 23:52:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:14.850 23:52:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:14.850 23:52:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:14.850 [2024-11-18 23:52:21.457686] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:14.850 [2024-11-18 23:52:21.457848] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62294 ] 00:08:15.109 [2024-11-18 23:52:21.634724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.109 [2024-11-18 23:52:21.715336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.368 [2024-11-18 23:52:21.869962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.368  [2024-11-18T23:52:23.006Z] Copying: 512/512 [B] (average 500 kBps) 00:08:16.314 00:08:16.314 23:52:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ we9fbw388xxphic5o719r2orxek6iq1y34apzcbat5qjty43talsr8mo27l8ab0e1fj41tpzsw0t03u5xvz359xovw3er4jgh66p3agnopcu8bie0w1ljil6fw09lg6dbkfi049zqe3ndjdnx8pdgq4c1aagb17wbv1psdzs9xn5vsatct39e3f2cql8jrkawkssyjpraxy3mce6vd59ze3hyp249blxyzaj1vi5mk0ryjyef8ot015zokrp83tnfa4zfpdseu76xoqp1j0zyhkiywvdbtzetju4fzdzplnsbj3rfsihlpwhk82rtoh0n7ft01ess0kyk7d4qcm57321266liv7e9k6777uor44m8xs14kf0aufbtllpuosgzztusbdb8o5bkgug0o3z9nwbiffv0659kvbmvlulqayi25x3jazm7sfgiis64th1nn0ire81adzeikyu1mrjp5p9xwvfasth87xei319p7kec0o10x8dm3w4qm7gtkwe == 
\w\e\9\f\b\w\3\8\8\x\x\p\h\i\c\5\o\7\1\9\r\2\o\r\x\e\k\6\i\q\1\y\3\4\a\p\z\c\b\a\t\5\q\j\t\y\4\3\t\a\l\s\r\8\m\o\2\7\l\8\a\b\0\e\1\f\j\4\1\t\p\z\s\w\0\t\0\3\u\5\x\v\z\3\5\9\x\o\v\w\3\e\r\4\j\g\h\6\6\p\3\a\g\n\o\p\c\u\8\b\i\e\0\w\1\l\j\i\l\6\f\w\0\9\l\g\6\d\b\k\f\i\0\4\9\z\q\e\3\n\d\j\d\n\x\8\p\d\g\q\4\c\1\a\a\g\b\1\7\w\b\v\1\p\s\d\z\s\9\x\n\5\v\s\a\t\c\t\3\9\e\3\f\2\c\q\l\8\j\r\k\a\w\k\s\s\y\j\p\r\a\x\y\3\m\c\e\6\v\d\5\9\z\e\3\h\y\p\2\4\9\b\l\x\y\z\a\j\1\v\i\5\m\k\0\r\y\j\y\e\f\8\o\t\0\1\5\z\o\k\r\p\8\3\t\n\f\a\4\z\f\p\d\s\e\u\7\6\x\o\q\p\1\j\0\z\y\h\k\i\y\w\v\d\b\t\z\e\t\j\u\4\f\z\d\z\p\l\n\s\b\j\3\r\f\s\i\h\l\p\w\h\k\8\2\r\t\o\h\0\n\7\f\t\0\1\e\s\s\0\k\y\k\7\d\4\q\c\m\5\7\3\2\1\2\6\6\l\i\v\7\e\9\k\6\7\7\7\u\o\r\4\4\m\8\x\s\1\4\k\f\0\a\u\f\b\t\l\l\p\u\o\s\g\z\z\t\u\s\b\d\b\8\o\5\b\k\g\u\g\0\o\3\z\9\n\w\b\i\f\f\v\0\6\5\9\k\v\b\m\v\l\u\l\q\a\y\i\2\5\x\3\j\a\z\m\7\s\f\g\i\i\s\6\4\t\h\1\n\n\0\i\r\e\8\1\a\d\z\e\i\k\y\u\1\m\r\j\p\5\p\9\x\w\v\f\a\s\t\h\8\7\x\e\i\3\1\9\p\7\k\e\c\0\o\1\0\x\8\d\m\3\w\4\q\m\7\g\t\k\w\e ]] 00:08:16.314 23:52:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:16.314 23:52:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:16.314 [2024-11-18 23:52:22.880546] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:16.314 [2024-11-18 23:52:22.880931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62308 ] 00:08:16.572 [2024-11-18 23:52:23.058193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.572 [2024-11-18 23:52:23.139283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.831 [2024-11-18 23:52:23.289828] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.831  [2024-11-18T23:52:24.458Z] Copying: 512/512 [B] (average 500 kBps) 00:08:17.766 00:08:17.767 23:52:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ we9fbw388xxphic5o719r2orxek6iq1y34apzcbat5qjty43talsr8mo27l8ab0e1fj41tpzsw0t03u5xvz359xovw3er4jgh66p3agnopcu8bie0w1ljil6fw09lg6dbkfi049zqe3ndjdnx8pdgq4c1aagb17wbv1psdzs9xn5vsatct39e3f2cql8jrkawkssyjpraxy3mce6vd59ze3hyp249blxyzaj1vi5mk0ryjyef8ot015zokrp83tnfa4zfpdseu76xoqp1j0zyhkiywvdbtzetju4fzdzplnsbj3rfsihlpwhk82rtoh0n7ft01ess0kyk7d4qcm57321266liv7e9k6777uor44m8xs14kf0aufbtllpuosgzztusbdb8o5bkgug0o3z9nwbiffv0659kvbmvlulqayi25x3jazm7sfgiis64th1nn0ire81adzeikyu1mrjp5p9xwvfasth87xei319p7kec0o10x8dm3w4qm7gtkwe == 
\w\e\9\f\b\w\3\8\8\x\x\p\h\i\c\5\o\7\1\9\r\2\o\r\x\e\k\6\i\q\1\y\3\4\a\p\z\c\b\a\t\5\q\j\t\y\4\3\t\a\l\s\r\8\m\o\2\7\l\8\a\b\0\e\1\f\j\4\1\t\p\z\s\w\0\t\0\3\u\5\x\v\z\3\5\9\x\o\v\w\3\e\r\4\j\g\h\6\6\p\3\a\g\n\o\p\c\u\8\b\i\e\0\w\1\l\j\i\l\6\f\w\0\9\l\g\6\d\b\k\f\i\0\4\9\z\q\e\3\n\d\j\d\n\x\8\p\d\g\q\4\c\1\a\a\g\b\1\7\w\b\v\1\p\s\d\z\s\9\x\n\5\v\s\a\t\c\t\3\9\e\3\f\2\c\q\l\8\j\r\k\a\w\k\s\s\y\j\p\r\a\x\y\3\m\c\e\6\v\d\5\9\z\e\3\h\y\p\2\4\9\b\l\x\y\z\a\j\1\v\i\5\m\k\0\r\y\j\y\e\f\8\o\t\0\1\5\z\o\k\r\p\8\3\t\n\f\a\4\z\f\p\d\s\e\u\7\6\x\o\q\p\1\j\0\z\y\h\k\i\y\w\v\d\b\t\z\e\t\j\u\4\f\z\d\z\p\l\n\s\b\j\3\r\f\s\i\h\l\p\w\h\k\8\2\r\t\o\h\0\n\7\f\t\0\1\e\s\s\0\k\y\k\7\d\4\q\c\m\5\7\3\2\1\2\6\6\l\i\v\7\e\9\k\6\7\7\7\u\o\r\4\4\m\8\x\s\1\4\k\f\0\a\u\f\b\t\l\l\p\u\o\s\g\z\z\t\u\s\b\d\b\8\o\5\b\k\g\u\g\0\o\3\z\9\n\w\b\i\f\f\v\0\6\5\9\k\v\b\m\v\l\u\l\q\a\y\i\2\5\x\3\j\a\z\m\7\s\f\g\i\i\s\6\4\t\h\1\n\n\0\i\r\e\8\1\a\d\z\e\i\k\y\u\1\m\r\j\p\5\p\9\x\w\v\f\a\s\t\h\8\7\x\e\i\3\1\9\p\7\k\e\c\0\o\1\0\x\8\d\m\3\w\4\q\m\7\g\t\k\w\e ]] 00:08:17.767 23:52:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:17.767 23:52:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:17.767 [2024-11-18 23:52:24.304823] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:17.767 [2024-11-18 23:52:24.305005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62328 ] 00:08:18.025 [2024-11-18 23:52:24.485109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.025 [2024-11-18 23:52:24.575760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.285 [2024-11-18 23:52:24.721727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.285  [2024-11-18T23:52:25.914Z] Copying: 512/512 [B] (average 166 kBps) 00:08:19.222 00:08:19.222 23:52:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ we9fbw388xxphic5o719r2orxek6iq1y34apzcbat5qjty43talsr8mo27l8ab0e1fj41tpzsw0t03u5xvz359xovw3er4jgh66p3agnopcu8bie0w1ljil6fw09lg6dbkfi049zqe3ndjdnx8pdgq4c1aagb17wbv1psdzs9xn5vsatct39e3f2cql8jrkawkssyjpraxy3mce6vd59ze3hyp249blxyzaj1vi5mk0ryjyef8ot015zokrp83tnfa4zfpdseu76xoqp1j0zyhkiywvdbtzetju4fzdzplnsbj3rfsihlpwhk82rtoh0n7ft01ess0kyk7d4qcm57321266liv7e9k6777uor44m8xs14kf0aufbtllpuosgzztusbdb8o5bkgug0o3z9nwbiffv0659kvbmvlulqayi25x3jazm7sfgiis64th1nn0ire81adzeikyu1mrjp5p9xwvfasth87xei319p7kec0o10x8dm3w4qm7gtkwe == 
\w\e\9\f\b\w\3\8\8\x\x\p\h\i\c\5\o\7\1\9\r\2\o\r\x\e\k\6\i\q\1\y\3\4\a\p\z\c\b\a\t\5\q\j\t\y\4\3\t\a\l\s\r\8\m\o\2\7\l\8\a\b\0\e\1\f\j\4\1\t\p\z\s\w\0\t\0\3\u\5\x\v\z\3\5\9\x\o\v\w\3\e\r\4\j\g\h\6\6\p\3\a\g\n\o\p\c\u\8\b\i\e\0\w\1\l\j\i\l\6\f\w\0\9\l\g\6\d\b\k\f\i\0\4\9\z\q\e\3\n\d\j\d\n\x\8\p\d\g\q\4\c\1\a\a\g\b\1\7\w\b\v\1\p\s\d\z\s\9\x\n\5\v\s\a\t\c\t\3\9\e\3\f\2\c\q\l\8\j\r\k\a\w\k\s\s\y\j\p\r\a\x\y\3\m\c\e\6\v\d\5\9\z\e\3\h\y\p\2\4\9\b\l\x\y\z\a\j\1\v\i\5\m\k\0\r\y\j\y\e\f\8\o\t\0\1\5\z\o\k\r\p\8\3\t\n\f\a\4\z\f\p\d\s\e\u\7\6\x\o\q\p\1\j\0\z\y\h\k\i\y\w\v\d\b\t\z\e\t\j\u\4\f\z\d\z\p\l\n\s\b\j\3\r\f\s\i\h\l\p\w\h\k\8\2\r\t\o\h\0\n\7\f\t\0\1\e\s\s\0\k\y\k\7\d\4\q\c\m\5\7\3\2\1\2\6\6\l\i\v\7\e\9\k\6\7\7\7\u\o\r\4\4\m\8\x\s\1\4\k\f\0\a\u\f\b\t\l\l\p\u\o\s\g\z\z\t\u\s\b\d\b\8\o\5\b\k\g\u\g\0\o\3\z\9\n\w\b\i\f\f\v\0\6\5\9\k\v\b\m\v\l\u\l\q\a\y\i\2\5\x\3\j\a\z\m\7\s\f\g\i\i\s\6\4\t\h\1\n\n\0\i\r\e\8\1\a\d\z\e\i\k\y\u\1\m\r\j\p\5\p\9\x\w\v\f\a\s\t\h\8\7\x\e\i\3\1\9\p\7\k\e\c\0\o\1\0\x\8\d\m\3\w\4\q\m\7\g\t\k\w\e ]] 00:08:19.222 23:52:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:19.222 23:52:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:19.222 [2024-11-18 23:52:25.781692] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:19.223 [2024-11-18 23:52:25.781892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62347 ] 00:08:19.481 [2024-11-18 23:52:25.962493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.481 [2024-11-18 23:52:26.061853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.740 [2024-11-18 23:52:26.218953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.740  [2024-11-18T23:52:27.368Z] Copying: 512/512 [B] (average 166 kBps) 00:08:20.676 00:08:20.676 23:52:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ we9fbw388xxphic5o719r2orxek6iq1y34apzcbat5qjty43talsr8mo27l8ab0e1fj41tpzsw0t03u5xvz359xovw3er4jgh66p3agnopcu8bie0w1ljil6fw09lg6dbkfi049zqe3ndjdnx8pdgq4c1aagb17wbv1psdzs9xn5vsatct39e3f2cql8jrkawkssyjpraxy3mce6vd59ze3hyp249blxyzaj1vi5mk0ryjyef8ot015zokrp83tnfa4zfpdseu76xoqp1j0zyhkiywvdbtzetju4fzdzplnsbj3rfsihlpwhk82rtoh0n7ft01ess0kyk7d4qcm57321266liv7e9k6777uor44m8xs14kf0aufbtllpuosgzztusbdb8o5bkgug0o3z9nwbiffv0659kvbmvlulqayi25x3jazm7sfgiis64th1nn0ire81adzeikyu1mrjp5p9xwvfasth87xei319p7kec0o10x8dm3w4qm7gtkwe == 
\w\e\9\f\b\w\3\8\8\x\x\p\h\i\c\5\o\7\1\9\r\2\o\r\x\e\k\6\i\q\1\y\3\4\a\p\z\c\b\a\t\5\q\j\t\y\4\3\t\a\l\s\r\8\m\o\2\7\l\8\a\b\0\e\1\f\j\4\1\t\p\z\s\w\0\t\0\3\u\5\x\v\z\3\5\9\x\o\v\w\3\e\r\4\j\g\h\6\6\p\3\a\g\n\o\p\c\u\8\b\i\e\0\w\1\l\j\i\l\6\f\w\0\9\l\g\6\d\b\k\f\i\0\4\9\z\q\e\3\n\d\j\d\n\x\8\p\d\g\q\4\c\1\a\a\g\b\1\7\w\b\v\1\p\s\d\z\s\9\x\n\5\v\s\a\t\c\t\3\9\e\3\f\2\c\q\l\8\j\r\k\a\w\k\s\s\y\j\p\r\a\x\y\3\m\c\e\6\v\d\5\9\z\e\3\h\y\p\2\4\9\b\l\x\y\z\a\j\1\v\i\5\m\k\0\r\y\j\y\e\f\8\o\t\0\1\5\z\o\k\r\p\8\3\t\n\f\a\4\z\f\p\d\s\e\u\7\6\x\o\q\p\1\j\0\z\y\h\k\i\y\w\v\d\b\t\z\e\t\j\u\4\f\z\d\z\p\l\n\s\b\j\3\r\f\s\i\h\l\p\w\h\k\8\2\r\t\o\h\0\n\7\f\t\0\1\e\s\s\0\k\y\k\7\d\4\q\c\m\5\7\3\2\1\2\6\6\l\i\v\7\e\9\k\6\7\7\7\u\o\r\4\4\m\8\x\s\1\4\k\f\0\a\u\f\b\t\l\l\p\u\o\s\g\z\z\t\u\s\b\d\b\8\o\5\b\k\g\u\g\0\o\3\z\9\n\w\b\i\f\f\v\0\6\5\9\k\v\b\m\v\l\u\l\q\a\y\i\2\5\x\3\j\a\z\m\7\s\f\g\i\i\s\6\4\t\h\1\n\n\0\i\r\e\8\1\a\d\z\e\i\k\y\u\1\m\r\j\p\5\p\9\x\w\v\f\a\s\t\h\8\7\x\e\i\3\1\9\p\7\k\e\c\0\o\1\0\x\8\d\m\3\w\4\q\m\7\g\t\k\w\e ]] 00:08:20.676 23:52:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:20.676 23:52:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:20.676 23:52:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:20.676 23:52:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:20.676 23:52:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:20.676 23:52:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:20.676 [2024-11-18 23:52:27.291509] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
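dd_flags_misc_forced_aio sweeps every read-flag/write-flag pairing: the passes above ran --iflag=direct against direct, nonblock, sync and dsync outputs, and the run starting here (spdk_pid62367) repeats the sweep with --iflag=nonblock. The shape of the loop, reconstructed from the arrays and loop heads visible in the trace; where gen_bytes writes is an assumption, since the helper's body is hidden behind xtrace_disable:

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)        # direct nonblock sync dsync
    for flag_ro in "${flags_ro[@]}"; do
        gen_bytes 512 > "$test_file0"             # fresh 512 random bytes per outer pass (redirect assumed)
        for flag_rw in "${flags_rw[@]}"; do
            "$SPDK_DD" --aio --if="$test_file0" --iflag="$flag_ro" \
                       --of="$test_file1" --oflag="$flag_rw"
            [[ $(<"$test_file0") == "$(<"$test_file1")" ]]   # copy must be byte-identical
        done
    done
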
00:08:20.676 [2024-11-18 23:52:27.292013] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62367 ] 00:08:20.934 [2024-11-18 23:52:27.470578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.934 [2024-11-18 23:52:27.552170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.192 [2024-11-18 23:52:27.705579] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.192  [2024-11-18T23:52:28.822Z] Copying: 512/512 [B] (average 500 kBps) 00:08:22.130 00:08:22.130 23:52:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wuohdn4sp2b92xx1hrw2xarwrwzk8mvtb0fsy07eezmc7xjenolm61dvwpwrrmr4szz0nvckk2uquba6fkjlwmuusuan337tvqe0t6j0h0kwjsjwl44watw8xq1x3pktonjc76h8zcqalj4ryk9faeh9ned3h86hyyn5v16nar6cru99158ax342nzr9dfjrndjavqwn63wr2fxs5dmoj3yibsgcsao4j3yrrrexldrgmj7k8rsni9b4gzrqirrzj19pom1m1yz5wahw31resunfzg3eudbl4px0vhmp6nl4xqpad9hqvk0s5h3pjt5f6zf0vitxcuy1a2xxdaeohd0ixmhucwnlkv5dcll8d8x0w7xi3veer8lgcuj0csnvnazkwdiiro6ab6prbekryehrj40iormj6eq0moxsclx9rfgwxcxjkpyldt3o0a3s0ft5dyrbk233794t7jqb1n7c0047p3dckly7w8f552ehgsrtxoa3yj29q5gfbk1t == \w\u\o\h\d\n\4\s\p\2\b\9\2\x\x\1\h\r\w\2\x\a\r\w\r\w\z\k\8\m\v\t\b\0\f\s\y\0\7\e\e\z\m\c\7\x\j\e\n\o\l\m\6\1\d\v\w\p\w\r\r\m\r\4\s\z\z\0\n\v\c\k\k\2\u\q\u\b\a\6\f\k\j\l\w\m\u\u\s\u\a\n\3\3\7\t\v\q\e\0\t\6\j\0\h\0\k\w\j\s\j\w\l\4\4\w\a\t\w\8\x\q\1\x\3\p\k\t\o\n\j\c\7\6\h\8\z\c\q\a\l\j\4\r\y\k\9\f\a\e\h\9\n\e\d\3\h\8\6\h\y\y\n\5\v\1\6\n\a\r\6\c\r\u\9\9\1\5\8\a\x\3\4\2\n\z\r\9\d\f\j\r\n\d\j\a\v\q\w\n\6\3\w\r\2\f\x\s\5\d\m\o\j\3\y\i\b\s\g\c\s\a\o\4\j\3\y\r\r\r\e\x\l\d\r\g\m\j\7\k\8\r\s\n\i\9\b\4\g\z\r\q\i\r\r\z\j\1\9\p\o\m\1\m\1\y\z\5\w\a\h\w\3\1\r\e\s\u\n\f\z\g\3\e\u\d\b\l\4\p\x\0\v\h\m\p\6\n\l\4\x\q\p\a\d\9\h\q\v\k\0\s\5\h\3\p\j\t\5\f\6\z\f\0\v\i\t\x\c\u\y\1\a\2\x\x\d\a\e\o\h\d\0\i\x\m\h\u\c\w\n\l\k\v\5\d\c\l\l\8\d\8\x\0\w\7\x\i\3\v\e\e\r\8\l\g\c\u\j\0\c\s\n\v\n\a\z\k\w\d\i\i\r\o\6\a\b\6\p\r\b\e\k\r\y\e\h\r\j\4\0\i\o\r\m\j\6\e\q\0\m\o\x\s\c\l\x\9\r\f\g\w\x\c\x\j\k\p\y\l\d\t\3\o\0\a\3\s\0\f\t\5\d\y\r\b\k\2\3\3\7\9\4\t\7\j\q\b\1\n\7\c\0\0\4\7\p\3\d\c\k\l\y\7\w\8\f\5\5\2\e\h\g\s\r\t\x\o\a\3\y\j\2\9\q\5\g\f\b\k\1\t ]] 00:08:22.130 23:52:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:22.130 23:52:28 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:22.130 [2024-11-18 23:52:28.712447] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
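The very long backslash runs in the [[ ... == \w\e\9... ]] lines above are an xtrace artifact, not corruption. The right-hand side of == inside [[ ]] is a glob pattern, so when bash echoes the expanded command under set -x it prints a quoted right-hand word with every character escaped to show it will be matched literally. The effect reproduces with any string:

    set -x
    data=abc
    [[ $data == "$data" ]]      # xtrace prints: [[ abc == \a\b\c ]]
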
00:08:22.130 [2024-11-18 23:52:28.712840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62381 ] 00:08:22.389 [2024-11-18 23:52:28.890992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.389 [2024-11-18 23:52:28.988739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.648 [2024-11-18 23:52:29.165609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.648  [2024-11-18T23:52:30.275Z] Copying: 512/512 [B] (average 500 kBps) 00:08:23.583 00:08:23.583 23:52:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wuohdn4sp2b92xx1hrw2xarwrwzk8mvtb0fsy07eezmc7xjenolm61dvwpwrrmr4szz0nvckk2uquba6fkjlwmuusuan337tvqe0t6j0h0kwjsjwl44watw8xq1x3pktonjc76h8zcqalj4ryk9faeh9ned3h86hyyn5v16nar6cru99158ax342nzr9dfjrndjavqwn63wr2fxs5dmoj3yibsgcsao4j3yrrrexldrgmj7k8rsni9b4gzrqirrzj19pom1m1yz5wahw31resunfzg3eudbl4px0vhmp6nl4xqpad9hqvk0s5h3pjt5f6zf0vitxcuy1a2xxdaeohd0ixmhucwnlkv5dcll8d8x0w7xi3veer8lgcuj0csnvnazkwdiiro6ab6prbekryehrj40iormj6eq0moxsclx9rfgwxcxjkpyldt3o0a3s0ft5dyrbk233794t7jqb1n7c0047p3dckly7w8f552ehgsrtxoa3yj29q5gfbk1t == \w\u\o\h\d\n\4\s\p\2\b\9\2\x\x\1\h\r\w\2\x\a\r\w\r\w\z\k\8\m\v\t\b\0\f\s\y\0\7\e\e\z\m\c\7\x\j\e\n\o\l\m\6\1\d\v\w\p\w\r\r\m\r\4\s\z\z\0\n\v\c\k\k\2\u\q\u\b\a\6\f\k\j\l\w\m\u\u\s\u\a\n\3\3\7\t\v\q\e\0\t\6\j\0\h\0\k\w\j\s\j\w\l\4\4\w\a\t\w\8\x\q\1\x\3\p\k\t\o\n\j\c\7\6\h\8\z\c\q\a\l\j\4\r\y\k\9\f\a\e\h\9\n\e\d\3\h\8\6\h\y\y\n\5\v\1\6\n\a\r\6\c\r\u\9\9\1\5\8\a\x\3\4\2\n\z\r\9\d\f\j\r\n\d\j\a\v\q\w\n\6\3\w\r\2\f\x\s\5\d\m\o\j\3\y\i\b\s\g\c\s\a\o\4\j\3\y\r\r\r\e\x\l\d\r\g\m\j\7\k\8\r\s\n\i\9\b\4\g\z\r\q\i\r\r\z\j\1\9\p\o\m\1\m\1\y\z\5\w\a\h\w\3\1\r\e\s\u\n\f\z\g\3\e\u\d\b\l\4\p\x\0\v\h\m\p\6\n\l\4\x\q\p\a\d\9\h\q\v\k\0\s\5\h\3\p\j\t\5\f\6\z\f\0\v\i\t\x\c\u\y\1\a\2\x\x\d\a\e\o\h\d\0\i\x\m\h\u\c\w\n\l\k\v\5\d\c\l\l\8\d\8\x\0\w\7\x\i\3\v\e\e\r\8\l\g\c\u\j\0\c\s\n\v\n\a\z\k\w\d\i\i\r\o\6\a\b\6\p\r\b\e\k\r\y\e\h\r\j\4\0\i\o\r\m\j\6\e\q\0\m\o\x\s\c\l\x\9\r\f\g\w\x\c\x\j\k\p\y\l\d\t\3\o\0\a\3\s\0\f\t\5\d\y\r\b\k\2\3\3\7\9\4\t\7\j\q\b\1\n\7\c\0\0\4\7\p\3\d\c\k\l\y\7\w\8\f\5\5\2\e\h\g\s\r\t\x\o\a\3\y\j\2\9\q\5\g\f\b\k\1\t ]] 00:08:23.583 23:52:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:23.583 23:52:30 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:23.583 [2024-11-18 23:52:30.207364] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
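The sync and dsync write flags correspond to the POSIX open flags O_SYNC and O_DSYNC: dsync makes each write return only once the data itself is durable, while sync additionally flushes file metadata per write. That extra work is consistent with the slowest averages in this trace (166 and 125 kBps) all coming from sync or dsync passes. As the suite issues them (SPDK_DD and the dump files as defined in the sketches above):

    # O_DSYNC semantics: data durable before each write returns
    "$SPDK_DD" --aio --if="$test_file0" --of="$test_file1" --oflag=dsync
    # O_SYNC semantics: data plus file metadata flushed on each write
    "$SPDK_DD" --aio --if="$test_file0" --of="$test_file1" --oflag=sync
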
00:08:23.583 [2024-11-18 23:52:30.207544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62406 ] 00:08:23.842 [2024-11-18 23:52:30.383120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.842 [2024-11-18 23:52:30.471689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.100 [2024-11-18 23:52:30.622503] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.100  [2024-11-18T23:52:31.729Z] Copying: 512/512 [B] (average 500 kBps) 00:08:25.037 00:08:25.037 23:52:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wuohdn4sp2b92xx1hrw2xarwrwzk8mvtb0fsy07eezmc7xjenolm61dvwpwrrmr4szz0nvckk2uquba6fkjlwmuusuan337tvqe0t6j0h0kwjsjwl44watw8xq1x3pktonjc76h8zcqalj4ryk9faeh9ned3h86hyyn5v16nar6cru99158ax342nzr9dfjrndjavqwn63wr2fxs5dmoj3yibsgcsao4j3yrrrexldrgmj7k8rsni9b4gzrqirrzj19pom1m1yz5wahw31resunfzg3eudbl4px0vhmp6nl4xqpad9hqvk0s5h3pjt5f6zf0vitxcuy1a2xxdaeohd0ixmhucwnlkv5dcll8d8x0w7xi3veer8lgcuj0csnvnazkwdiiro6ab6prbekryehrj40iormj6eq0moxsclx9rfgwxcxjkpyldt3o0a3s0ft5dyrbk233794t7jqb1n7c0047p3dckly7w8f552ehgsrtxoa3yj29q5gfbk1t == \w\u\o\h\d\n\4\s\p\2\b\9\2\x\x\1\h\r\w\2\x\a\r\w\r\w\z\k\8\m\v\t\b\0\f\s\y\0\7\e\e\z\m\c\7\x\j\e\n\o\l\m\6\1\d\v\w\p\w\r\r\m\r\4\s\z\z\0\n\v\c\k\k\2\u\q\u\b\a\6\f\k\j\l\w\m\u\u\s\u\a\n\3\3\7\t\v\q\e\0\t\6\j\0\h\0\k\w\j\s\j\w\l\4\4\w\a\t\w\8\x\q\1\x\3\p\k\t\o\n\j\c\7\6\h\8\z\c\q\a\l\j\4\r\y\k\9\f\a\e\h\9\n\e\d\3\h\8\6\h\y\y\n\5\v\1\6\n\a\r\6\c\r\u\9\9\1\5\8\a\x\3\4\2\n\z\r\9\d\f\j\r\n\d\j\a\v\q\w\n\6\3\w\r\2\f\x\s\5\d\m\o\j\3\y\i\b\s\g\c\s\a\o\4\j\3\y\r\r\r\e\x\l\d\r\g\m\j\7\k\8\r\s\n\i\9\b\4\g\z\r\q\i\r\r\z\j\1\9\p\o\m\1\m\1\y\z\5\w\a\h\w\3\1\r\e\s\u\n\f\z\g\3\e\u\d\b\l\4\p\x\0\v\h\m\p\6\n\l\4\x\q\p\a\d\9\h\q\v\k\0\s\5\h\3\p\j\t\5\f\6\z\f\0\v\i\t\x\c\u\y\1\a\2\x\x\d\a\e\o\h\d\0\i\x\m\h\u\c\w\n\l\k\v\5\d\c\l\l\8\d\8\x\0\w\7\x\i\3\v\e\e\r\8\l\g\c\u\j\0\c\s\n\v\n\a\z\k\w\d\i\i\r\o\6\a\b\6\p\r\b\e\k\r\y\e\h\r\j\4\0\i\o\r\m\j\6\e\q\0\m\o\x\s\c\l\x\9\r\f\g\w\x\c\x\j\k\p\y\l\d\t\3\o\0\a\3\s\0\f\t\5\d\y\r\b\k\2\3\3\7\9\4\t\7\j\q\b\1\n\7\c\0\0\4\7\p\3\d\c\k\l\y\7\w\8\f\5\5\2\e\h\g\s\r\t\x\o\a\3\y\j\2\9\q\5\g\f\b\k\1\t ]] 00:08:25.037 23:52:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:25.037 23:52:31 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:25.037 [2024-11-18 23:52:31.660352] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
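The "average ... kBps" figures on these 512-byte copies reflect per-run latency rather than device throughput: 512 bytes moved in about one millisecond of measured I/O time reports as 500 kBps, which is the ceiling most of these passes hit. The arithmetic, with the one-millisecond I/O time inferred from the log rather than stated in it:

    bytes=512 ms=1
    echo $(( bytes * 1000 / ms / 1024 ))    # -> 500, i.e. 500 KiB/s printed as kBps
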
00:08:25.037 [2024-11-18 23:52:31.660558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62420 ] 00:08:25.296 [2024-11-18 23:52:31.838725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.296 [2024-11-18 23:52:31.926687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.555 [2024-11-18 23:52:32.086493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.555  [2024-11-18T23:52:33.185Z] Copying: 512/512 [B] (average 125 kBps) 00:08:26.493 00:08:26.493 23:52:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wuohdn4sp2b92xx1hrw2xarwrwzk8mvtb0fsy07eezmc7xjenolm61dvwpwrrmr4szz0nvckk2uquba6fkjlwmuusuan337tvqe0t6j0h0kwjsjwl44watw8xq1x3pktonjc76h8zcqalj4ryk9faeh9ned3h86hyyn5v16nar6cru99158ax342nzr9dfjrndjavqwn63wr2fxs5dmoj3yibsgcsao4j3yrrrexldrgmj7k8rsni9b4gzrqirrzj19pom1m1yz5wahw31resunfzg3eudbl4px0vhmp6nl4xqpad9hqvk0s5h3pjt5f6zf0vitxcuy1a2xxdaeohd0ixmhucwnlkv5dcll8d8x0w7xi3veer8lgcuj0csnvnazkwdiiro6ab6prbekryehrj40iormj6eq0moxsclx9rfgwxcxjkpyldt3o0a3s0ft5dyrbk233794t7jqb1n7c0047p3dckly7w8f552ehgsrtxoa3yj29q5gfbk1t == \w\u\o\h\d\n\4\s\p\2\b\9\2\x\x\1\h\r\w\2\x\a\r\w\r\w\z\k\8\m\v\t\b\0\f\s\y\0\7\e\e\z\m\c\7\x\j\e\n\o\l\m\6\1\d\v\w\p\w\r\r\m\r\4\s\z\z\0\n\v\c\k\k\2\u\q\u\b\a\6\f\k\j\l\w\m\u\u\s\u\a\n\3\3\7\t\v\q\e\0\t\6\j\0\h\0\k\w\j\s\j\w\l\4\4\w\a\t\w\8\x\q\1\x\3\p\k\t\o\n\j\c\7\6\h\8\z\c\q\a\l\j\4\r\y\k\9\f\a\e\h\9\n\e\d\3\h\8\6\h\y\y\n\5\v\1\6\n\a\r\6\c\r\u\9\9\1\5\8\a\x\3\4\2\n\z\r\9\d\f\j\r\n\d\j\a\v\q\w\n\6\3\w\r\2\f\x\s\5\d\m\o\j\3\y\i\b\s\g\c\s\a\o\4\j\3\y\r\r\r\e\x\l\d\r\g\m\j\7\k\8\r\s\n\i\9\b\4\g\z\r\q\i\r\r\z\j\1\9\p\o\m\1\m\1\y\z\5\w\a\h\w\3\1\r\e\s\u\n\f\z\g\3\e\u\d\b\l\4\p\x\0\v\h\m\p\6\n\l\4\x\q\p\a\d\9\h\q\v\k\0\s\5\h\3\p\j\t\5\f\6\z\f\0\v\i\t\x\c\u\y\1\a\2\x\x\d\a\e\o\h\d\0\i\x\m\h\u\c\w\n\l\k\v\5\d\c\l\l\8\d\8\x\0\w\7\x\i\3\v\e\e\r\8\l\g\c\u\j\0\c\s\n\v\n\a\z\k\w\d\i\i\r\o\6\a\b\6\p\r\b\e\k\r\y\e\h\r\j\4\0\i\o\r\m\j\6\e\q\0\m\o\x\s\c\l\x\9\r\f\g\w\x\c\x\j\k\p\y\l\d\t\3\o\0\a\3\s\0\f\t\5\d\y\r\b\k\2\3\3\7\9\4\t\7\j\q\b\1\n\7\c\0\0\4\7\p\3\d\c\k\l\y\7\w\8\f\5\5\2\e\h\g\s\r\t\x\o\a\3\y\j\2\9\q\5\g\f\b\k\1\t ]] 00:08:26.493 00:08:26.493 real 0m11.685s 00:08:26.493 user 0m9.188s 00:08:26.493 sys 0m1.495s 00:08:26.493 23:52:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.493 23:52:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:26.493 ************************************ 00:08:26.493 END TEST dd_flags_misc_forced_aio 00:08:26.493 ************************************ 00:08:26.493 23:52:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:26.493 23:52:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:26.493 23:52:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:26.493 ************************************ 00:08:26.493 END TEST spdk_dd_posix 00:08:26.493 ************************************ 00:08:26.493 00:08:26.493 real 0m50.806s 00:08:26.493 user 0m38.641s 00:08:26.493 sys 0m14.166s 00:08:26.493 23:52:33 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.493 23:52:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:26.493 23:52:33 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:26.493 23:52:33 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.493 23:52:33 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.493 23:52:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:26.493 ************************************ 00:08:26.493 START TEST spdk_dd_malloc 00:08:26.493 ************************************ 00:08:26.493 23:52:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:26.753 * Looking for test storage... 00:08:26.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:26.753 23:52:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:26.753 23:52:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:26.753 23:52:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:26.753 23:52:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:26.753 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.753 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.753 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.753 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.753 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.753 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.753 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.753 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:26.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.754 --rc genhtml_branch_coverage=1 00:08:26.754 --rc genhtml_function_coverage=1 00:08:26.754 --rc genhtml_legend=1 00:08:26.754 --rc geninfo_all_blocks=1 00:08:26.754 --rc geninfo_unexecuted_blocks=1 00:08:26.754 00:08:26.754 ' 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:26.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.754 --rc genhtml_branch_coverage=1 00:08:26.754 --rc genhtml_function_coverage=1 00:08:26.754 --rc genhtml_legend=1 00:08:26.754 --rc geninfo_all_blocks=1 00:08:26.754 --rc geninfo_unexecuted_blocks=1 00:08:26.754 00:08:26.754 ' 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:26.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.754 --rc genhtml_branch_coverage=1 00:08:26.754 --rc genhtml_function_coverage=1 00:08:26.754 --rc genhtml_legend=1 00:08:26.754 --rc geninfo_all_blocks=1 00:08:26.754 --rc geninfo_unexecuted_blocks=1 00:08:26.754 00:08:26.754 ' 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:26.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.754 --rc genhtml_branch_coverage=1 00:08:26.754 --rc genhtml_function_coverage=1 00:08:26.754 --rc genhtml_legend=1 00:08:26.754 --rc geninfo_all_blocks=1 00:08:26.754 --rc geninfo_unexecuted_blocks=1 00:08:26.754 00:08:26.754 ' 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.754 23:52:33 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:26.754 ************************************ 00:08:26.754 START TEST dd_malloc_copy 00:08:26.754 ************************************ 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
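dd_malloc_copy, being configured here, creates two RAM-backed malloc bdevs of 1048576 blocks x 512 bytes (512 MiB each) and copies one onto the other in both directions. The suite assembles the bdev definitions into JSON and hands them to spdk_dd on file descriptor 62; a condensed equivalent that inlines the same config through process substitution:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    config='{"subsystems":[{"subsystem":"bdev","config":[
      {"method":"bdev_malloc_create","params":{"name":"malloc0","block_size":512,"num_blocks":1048576}},
      {"method":"bdev_malloc_create","params":{"name":"malloc1","block_size":512,"num_blocks":1048576}},
      {"method":"bdev_wait_for_examine"}]}]}'
    "$SPDK_DD" --ib=malloc0 --ob=malloc1 --json <(printf '%s' "$config")
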
00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:26.754 23:52:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:26.754 { 00:08:26.754 "subsystems": [ 00:08:26.754 { 00:08:26.754 "subsystem": "bdev", 00:08:26.754 "config": [ 00:08:26.754 { 00:08:26.754 "params": { 00:08:26.754 "block_size": 512, 00:08:26.754 "num_blocks": 1048576, 00:08:26.754 "name": "malloc0" 00:08:26.754 }, 00:08:26.754 "method": "bdev_malloc_create" 00:08:26.754 }, 00:08:26.754 { 00:08:26.754 "params": { 00:08:26.754 "block_size": 512, 00:08:26.754 "num_blocks": 1048576, 00:08:26.754 "name": "malloc1" 00:08:26.754 }, 00:08:26.754 "method": "bdev_malloc_create" 00:08:26.755 }, 00:08:26.755 { 00:08:26.755 "method": "bdev_wait_for_examine" 00:08:26.755 } 00:08:26.755 ] 00:08:26.755 } 00:08:26.755 ] 00:08:26.755 } 00:08:27.014 [2024-11-18 23:52:33.445877] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:27.014 [2024-11-18 23:52:33.446849] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62514 ] 00:08:27.014 [2024-11-18 23:52:33.627693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.273 [2024-11-18 23:52:33.717011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.273 [2024-11-18 23:52:33.871020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.190  [2024-11-18T23:52:36.832Z] Copying: 174/512 [MB] (174 MBps) [2024-11-18T23:52:37.769Z] Copying: 351/512 [MB] (177 MBps) [2024-11-18T23:52:41.058Z] Copying: 512/512 [MB] (average 179 MBps) 00:08:34.366 00:08:34.366 23:52:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:34.366 23:52:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:34.366 23:52:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:34.366 23:52:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:34.366 { 00:08:34.366 "subsystems": [ 00:08:34.366 { 00:08:34.366 "subsystem": "bdev", 00:08:34.366 "config": [ 00:08:34.366 { 00:08:34.366 "params": { 00:08:34.366 "block_size": 512, 00:08:34.366 "num_blocks": 1048576, 00:08:34.366 "name": "malloc0" 00:08:34.366 }, 00:08:34.366 "method": "bdev_malloc_create" 00:08:34.366 }, 00:08:34.366 { 00:08:34.366 "params": { 00:08:34.366 "block_size": 512, 00:08:34.366 "num_blocks": 1048576, 00:08:34.366 "name": "malloc1" 00:08:34.366 }, 00:08:34.366 "method": 
"bdev_malloc_create" 00:08:34.366 }, 00:08:34.366 { 00:08:34.366 "method": "bdev_wait_for_examine" 00:08:34.366 } 00:08:34.366 ] 00:08:34.366 } 00:08:34.366 ] 00:08:34.366 } 00:08:34.366 [2024-11-18 23:52:40.645595] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:34.366 [2024-11-18 23:52:40.645749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62601 ] 00:08:34.366 [2024-11-18 23:52:40.812928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.366 [2024-11-18 23:52:40.899527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.625 [2024-11-18 23:52:41.060967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.529  [2024-11-18T23:52:44.158Z] Copying: 180/512 [MB] (180 MBps) [2024-11-18T23:52:44.726Z] Copying: 370/512 [MB] (190 MBps) [2024-11-18T23:52:48.015Z] Copying: 512/512 [MB] (average 187 MBps) 00:08:41.323 00:08:41.323 ************************************ 00:08:41.323 END TEST dd_malloc_copy 00:08:41.323 ************************************ 00:08:41.323 00:08:41.323 real 0m14.194s 00:08:41.323 user 0m13.219s 00:08:41.323 sys 0m0.781s 00:08:41.323 23:52:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.323 23:52:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:41.323 ************************************ 00:08:41.323 END TEST spdk_dd_malloc 00:08:41.323 ************************************ 00:08:41.323 00:08:41.323 real 0m14.438s 00:08:41.323 user 0m13.350s 00:08:41.323 sys 0m0.892s 00:08:41.323 23:52:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.323 23:52:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:41.323 23:52:47 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:41.323 23:52:47 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:41.323 23:52:47 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.323 23:52:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:41.323 ************************************ 00:08:41.323 START TEST spdk_dd_bdev_to_bdev 00:08:41.323 ************************************ 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:41.324 * Looking for test storage... 
00:08:41.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:41.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.324 --rc genhtml_branch_coverage=1 00:08:41.324 --rc genhtml_function_coverage=1 00:08:41.324 --rc genhtml_legend=1 00:08:41.324 --rc geninfo_all_blocks=1 00:08:41.324 --rc geninfo_unexecuted_blocks=1 00:08:41.324 00:08:41.324 ' 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:41.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.324 --rc genhtml_branch_coverage=1 00:08:41.324 --rc genhtml_function_coverage=1 00:08:41.324 --rc genhtml_legend=1 00:08:41.324 --rc geninfo_all_blocks=1 00:08:41.324 --rc geninfo_unexecuted_blocks=1 00:08:41.324 00:08:41.324 ' 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:41.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.324 --rc genhtml_branch_coverage=1 00:08:41.324 --rc genhtml_function_coverage=1 00:08:41.324 --rc genhtml_legend=1 00:08:41.324 --rc geninfo_all_blocks=1 00:08:41.324 --rc geninfo_unexecuted_blocks=1 00:08:41.324 00:08:41.324 ' 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:41.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.324 --rc genhtml_branch_coverage=1 00:08:41.324 --rc genhtml_function_coverage=1 00:08:41.324 --rc genhtml_legend=1 00:08:41.324 --rc geninfo_all_blocks=1 00:08:41.324 --rc geninfo_unexecuted_blocks=1 00:08:41.324 00:08:41.324 ' 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.324 23:52:47 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:41.324 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:41.325 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:41.325 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:41.325 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:41.325 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:41.325 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.325 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:41.325 ************************************ 00:08:41.325 START TEST dd_inflate_file 00:08:41.325 ************************************ 00:08:41.325 23:52:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:41.325 [2024-11-18 23:52:47.935103] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
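The dd_inflate_file step whose startup banner appears above appends 64 MiB of zeroes (--oflag=append --bs=1048576 --count=64) to dd.dump0, after the 27-byte magic line (26 characters plus echo's trailing newline) written at bdev_to_bdev.sh@93 just before; that is why wc -c later reports test_file0_size=67108891, i.e. 64*1048576 + 27. A minimal sketch of the same invocation, assuming the repo paths from this run:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    $DD --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --oflag=append --bs=1048576 --count=64
    wc -c < /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0   # expect 67108891
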
00:08:41.325 [2024-11-18 23:52:47.935297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62748 ] 00:08:41.584 [2024-11-18 23:52:48.114283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.584 [2024-11-18 23:52:48.196971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.843 [2024-11-18 23:52:48.343057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.843  [2024-11-18T23:52:49.472Z] Copying: 64/64 [MB] (average 1684 MBps) 00:08:42.780 00:08:42.780 ************************************ 00:08:42.780 END TEST dd_inflate_file 00:08:42.780 ************************************ 00:08:42.780 00:08:42.780 real 0m1.478s 00:08:42.780 user 0m1.176s 00:08:42.780 sys 0m0.871s 00:08:42.780 23:52:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.780 23:52:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:42.780 23:52:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:42.780 23:52:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:42.780 23:52:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:42.780 23:52:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:42.780 23:52:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.780 23:52:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:42.780 23:52:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:42.780 23:52:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:42.780 23:52:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:42.780 ************************************ 00:08:42.780 START TEST dd_copy_to_out_bdev 00:08:42.780 ************************************ 00:08:42.780 23:52:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:42.780 { 00:08:42.780 "subsystems": [ 00:08:42.780 { 00:08:42.780 "subsystem": "bdev", 00:08:42.780 "config": [ 00:08:42.780 { 00:08:42.780 "params": { 00:08:42.780 "trtype": "pcie", 00:08:42.780 "traddr": "0000:00:10.0", 00:08:42.780 "name": "Nvme0" 00:08:42.780 }, 00:08:42.780 "method": "bdev_nvme_attach_controller" 00:08:42.780 }, 00:08:42.780 { 00:08:42.780 "params": { 00:08:42.780 "trtype": "pcie", 00:08:42.780 "traddr": "0000:00:11.0", 00:08:42.780 "name": "Nvme1" 00:08:42.780 }, 00:08:42.780 "method": "bdev_nvme_attach_controller" 00:08:42.780 }, 00:08:42.780 { 00:08:42.780 "method": "bdev_wait_for_examine" 00:08:42.780 } 00:08:42.780 ] 00:08:42.780 } 00:08:42.780 ] 00:08:42.780 } 00:08:43.039 [2024-11-18 23:52:49.496294] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
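Every spdk_dd run in this suite takes its bdev configuration as JSON on an inherited file descriptor (--json /dev/fd/62) rather than from a config file: gen_conf prints the subsystem block dumped above and the harness feeds it in over a pipe. A hedged sketch of the pattern, abridged to a single controller and using bash process substitution to produce the /dev/fd path (the real gen_conf in dd/common.sh assembles this JSON from the method_bdev_* arrays declared earlier):

    gen_conf() {
      printf '%s' '{ "subsystems": [ { "subsystem": "bdev", "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" } ] } ] }'
    }
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    $DD --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json <(gen_conf)
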
00:08:43.039 [2024-11-18 23:52:49.496781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62793 ] 00:08:43.039 [2024-11-18 23:52:49.678162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.298 [2024-11-18 23:52:49.770349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.298 [2024-11-18 23:52:49.926204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:44.701  [2024-11-18T23:52:51.650Z] Copying: 49/64 [MB] (49 MBps) [2024-11-18T23:52:52.586Z] Copying: 64/64 [MB] (average 49 MBps) 00:08:45.894 00:08:45.894 00:08:45.894 real 0m2.989s 00:08:45.894 user 0m2.710s 00:08:45.894 sys 0m2.199s 00:08:45.894 23:52:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.894 ************************************ 00:08:45.894 END TEST dd_copy_to_out_bdev 00:08:45.894 ************************************ 00:08:45.894 23:52:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:45.894 23:52:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:45.894 23:52:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:45.894 23:52:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:45.894 23:52:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.894 23:52:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:45.894 ************************************ 00:08:45.894 START TEST dd_offset_magic 00:08:45.894 ************************************ 00:08:45.894 23:52:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:08:45.894 23:52:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:45.894 23:52:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:45.894 23:52:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:45.894 23:52:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:45.894 23:52:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:45.894 23:52:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:45.894 23:52:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:45.894 23:52:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:45.894 { 00:08:45.894 "subsystems": [ 00:08:45.894 { 00:08:45.894 "subsystem": "bdev", 00:08:45.894 "config": [ 00:08:45.894 { 00:08:45.894 "params": { 00:08:45.894 "trtype": "pcie", 00:08:45.894 "traddr": "0000:00:10.0", 00:08:45.894 "name": "Nvme0" 00:08:45.894 }, 00:08:45.894 "method": "bdev_nvme_attach_controller" 00:08:45.894 }, 00:08:45.894 { 00:08:45.894 "params": { 00:08:45.894 "trtype": "pcie", 00:08:45.894 "traddr": "0000:00:11.0", 00:08:45.894 "name": "Nvme1" 
00:08:45.894 }, 00:08:45.894 "method": "bdev_nvme_attach_controller" 00:08:45.894 }, 00:08:45.894 { 00:08:45.894 "method": "bdev_wait_for_examine" 00:08:45.894 } 00:08:45.894 ] 00:08:45.894 } 00:08:45.894 ] 00:08:45.894 } 00:08:45.894 [2024-11-18 23:52:52.520877] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:45.894 [2024-11-18 23:52:52.521077] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62850 ] 00:08:46.153 [2024-11-18 23:52:52.704375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.153 [2024-11-18 23:52:52.791065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.412 [2024-11-18 23:52:52.943259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.672  [2024-11-18T23:52:54.300Z] Copying: 65/65 [MB] (average 1000 MBps) 00:08:47.608 00:08:47.608 23:52:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:47.608 23:52:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:47.608 23:52:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:47.608 23:52:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:47.608 { 00:08:47.608 "subsystems": [ 00:08:47.608 { 00:08:47.608 "subsystem": "bdev", 00:08:47.608 "config": [ 00:08:47.608 { 00:08:47.608 "params": { 00:08:47.608 "trtype": "pcie", 00:08:47.608 "traddr": "0000:00:10.0", 00:08:47.608 "name": "Nvme0" 00:08:47.608 }, 00:08:47.608 "method": "bdev_nvme_attach_controller" 00:08:47.608 }, 00:08:47.608 { 00:08:47.608 "params": { 00:08:47.608 "trtype": "pcie", 00:08:47.608 "traddr": "0000:00:11.0", 00:08:47.609 "name": "Nvme1" 00:08:47.609 }, 00:08:47.609 "method": "bdev_nvme_attach_controller" 00:08:47.609 }, 00:08:47.609 { 00:08:47.609 "method": "bdev_wait_for_examine" 00:08:47.609 } 00:08:47.609 ] 00:08:47.609 } 00:08:47.609 ] 00:08:47.609 } 00:08:47.609 [2024-11-18 23:52:54.172855] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:08:47.609 [2024-11-18 23:52:54.173035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62877 ] 00:08:47.868 [2024-11-18 23:52:54.350731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.868 [2024-11-18 23:52:54.444472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.127 [2024-11-18 23:52:54.601265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:48.386  [2024-11-18T23:52:56.017Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:49.325 00:08:49.325 23:52:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:49.325 23:52:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:49.325 23:52:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:49.325 23:52:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:49.325 23:52:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:49.325 23:52:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:49.325 23:52:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:49.325 { 00:08:49.325 "subsystems": [ 00:08:49.325 { 00:08:49.325 "subsystem": "bdev", 00:08:49.325 "config": [ 00:08:49.325 { 00:08:49.325 "params": { 00:08:49.325 "trtype": "pcie", 00:08:49.325 "traddr": "0000:00:10.0", 00:08:49.325 "name": "Nvme0" 00:08:49.325 }, 00:08:49.325 "method": "bdev_nvme_attach_controller" 00:08:49.325 }, 00:08:49.325 { 00:08:49.325 "params": { 00:08:49.325 "trtype": "pcie", 00:08:49.325 "traddr": "0000:00:11.0", 00:08:49.325 "name": "Nvme1" 00:08:49.325 }, 00:08:49.325 "method": "bdev_nvme_attach_controller" 00:08:49.325 }, 00:08:49.325 { 00:08:49.325 "method": "bdev_wait_for_examine" 00:08:49.325 } 00:08:49.325 ] 00:08:49.325 } 00:08:49.325 ] 00:08:49.325 } 00:08:49.325 [2024-11-18 23:52:55.793380] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
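The dd_offset_magic pass that just finished is a round trip: copy 65 MiB from Nvme0n1 into Nvme1n1 at a 16 MiB destination offset (--seek=16), copy 1 MiB back out of Nvme1n1 from the same offset (--skip=16) into dd.dump1, then compare the first 26 bytes against the magic. Reconstructed in outline; the flags are verbatim from the trace, while the input redirection on read is implied by the script rather than shown:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # gen_conf as sketched above
    $DD --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json <(gen_conf)
    $DD --ib=Nvme1n1 --of=test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json <(gen_conf)
    read -rn26 magic_check < test/dd/dd.dump1
    [[ $magic_check == 'This Is Our Magic, find it' ]]   # fails the test if the offset drifted

The second iteration, starting below, repeats the same check at offset 64.
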
00:08:49.325 [2024-11-18 23:52:55.793572] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62911 ] 00:08:49.325 [2024-11-18 23:52:55.974334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.584 [2024-11-18 23:52:56.061241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.584 [2024-11-18 23:52:56.205366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.844  [2024-11-18T23:52:57.472Z] Copying: 65/65 [MB] (average 1048 MBps) 00:08:50.780 00:08:50.780 23:52:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:50.780 23:52:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:50.780 23:52:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:50.780 23:52:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:50.780 { 00:08:50.780 "subsystems": [ 00:08:50.780 { 00:08:50.780 "subsystem": "bdev", 00:08:50.780 "config": [ 00:08:50.780 { 00:08:50.780 "params": { 00:08:50.780 "trtype": "pcie", 00:08:50.780 "traddr": "0000:00:10.0", 00:08:50.780 "name": "Nvme0" 00:08:50.780 }, 00:08:50.780 "method": "bdev_nvme_attach_controller" 00:08:50.780 }, 00:08:50.780 { 00:08:50.780 "params": { 00:08:50.780 "trtype": "pcie", 00:08:50.780 "traddr": "0000:00:11.0", 00:08:50.780 "name": "Nvme1" 00:08:50.780 }, 00:08:50.780 "method": "bdev_nvme_attach_controller" 00:08:50.780 }, 00:08:50.780 { 00:08:50.780 "method": "bdev_wait_for_examine" 00:08:50.780 } 00:08:50.780 ] 00:08:50.780 } 00:08:50.780 ] 00:08:50.780 } 00:08:50.780 [2024-11-18 23:52:57.369661] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:08:50.780 [2024-11-18 23:52:57.369849] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62932 ] 00:08:51.039 [2024-11-18 23:52:57.557923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.039 [2024-11-18 23:52:57.654238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.297 [2024-11-18 23:52:57.822708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.556  [2024-11-18T23:52:59.186Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:52.494 00:08:52.494 23:52:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:52.494 23:52:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:52.494 00:08:52.494 real 0m6.506s 00:08:52.494 user 0m5.438s 00:08:52.494 sys 0m2.121s 00:08:52.494 23:52:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.494 ************************************ 00:08:52.494 END TEST dd_offset_magic 00:08:52.494 ************************************ 00:08:52.494 23:52:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:52.494 23:52:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:52.494 23:52:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:52.494 23:52:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:52.494 23:52:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:52.494 23:52:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:52.494 23:52:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:52.494 23:52:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:52.494 23:52:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:52.494 23:52:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:52.494 23:52:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:52.494 23:52:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:52.494 { 00:08:52.494 "subsystems": [ 00:08:52.494 { 00:08:52.494 "subsystem": "bdev", 00:08:52.494 "config": [ 00:08:52.494 { 00:08:52.494 "params": { 00:08:52.494 "trtype": "pcie", 00:08:52.494 "traddr": "0000:00:10.0", 00:08:52.494 "name": "Nvme0" 00:08:52.494 }, 00:08:52.494 "method": "bdev_nvme_attach_controller" 00:08:52.494 }, 00:08:52.494 { 00:08:52.494 "params": { 00:08:52.494 "trtype": "pcie", 00:08:52.494 "traddr": "0000:00:11.0", 00:08:52.494 "name": "Nvme1" 00:08:52.494 }, 00:08:52.494 "method": "bdev_nvme_attach_controller" 00:08:52.494 }, 00:08:52.494 { 00:08:52.494 "method": "bdev_wait_for_examine" 00:08:52.494 } 00:08:52.494 ] 00:08:52.494 } 00:08:52.494 ] 00:08:52.494 } 00:08:52.494 [2024-11-18 23:52:59.061823] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:08:52.494 [2024-11-18 23:52:59.061984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62981 ] 00:08:52.753 [2024-11-18 23:52:59.238916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.753 [2024-11-18 23:52:59.320590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.012 [2024-11-18 23:52:59.469144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.012  [2024-11-18T23:53:00.641Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:53.949 00:08:53.949 23:53:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:53.949 23:53:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:53.949 23:53:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:53.949 23:53:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:53.949 23:53:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:53.949 23:53:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:53.949 23:53:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:53.949 23:53:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:53.949 23:53:00 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:53.949 23:53:00 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:53.949 { 00:08:53.949 "subsystems": [ 00:08:53.949 { 00:08:53.949 "subsystem": "bdev", 00:08:53.949 "config": [ 00:08:53.949 { 00:08:53.949 "params": { 00:08:53.949 "trtype": "pcie", 00:08:53.949 "traddr": "0000:00:10.0", 00:08:53.949 "name": "Nvme0" 00:08:53.949 }, 00:08:53.949 "method": "bdev_nvme_attach_controller" 00:08:53.949 }, 00:08:53.949 { 00:08:53.949 "params": { 00:08:53.949 "trtype": "pcie", 00:08:53.949 "traddr": "0000:00:11.0", 00:08:53.949 "name": "Nvme1" 00:08:53.949 }, 00:08:53.949 "method": "bdev_nvme_attach_controller" 00:08:53.949 }, 00:08:53.949 { 00:08:53.949 "method": "bdev_wait_for_examine" 00:08:53.949 } 00:08:53.949 ] 00:08:53.949 } 00:08:53.949 ] 00:08:53.949 } 00:08:53.949 [2024-11-18 23:53:00.498488] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
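Cleanup's clear_nvme wipes the head of each bdev the test touched. The size argument 4194330 slightly exceeds 4 MiB (4194304 bytes), so at bs=1048576 it takes five blocks, matching the count=5 in dd/common.sh and the Copying: 5120/5120 [kB] lines here. The rounding below is the rationale, not the script's literal code, which hard-codes count=5 for this size:

    bs=1048576
    size=4194330
    count=$(( (size + bs - 1) / bs ))   # ceil(4194330 / 1048576) = 5
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    $DD --if=/dev/zero --bs="$bs" --ob=Nvme0n1 --count="$count" --json <(gen_conf)
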
00:08:53.950 [2024-11-18 23:53:00.498682] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63003 ] 00:08:54.209 [2024-11-18 23:53:00.674164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.209 [2024-11-18 23:53:00.755016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.468 [2024-11-18 23:53:00.922395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:54.468  [2024-11-18T23:53:02.099Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:08:55.407 00:08:55.407 23:53:02 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:55.407 ************************************ 00:08:55.407 END TEST spdk_dd_bdev_to_bdev 00:08:55.407 ************************************ 00:08:55.407 00:08:55.407 real 0m14.461s 00:08:55.407 user 0m12.073s 00:08:55.407 sys 0m6.950s 00:08:55.407 23:53:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.407 23:53:02 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:55.666 23:53:02 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:55.666 23:53:02 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:55.666 23:53:02 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.666 23:53:02 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.666 23:53:02 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:55.666 ************************************ 00:08:55.666 START TEST spdk_dd_uring 00:08:55.666 ************************************ 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:55.666 * Looking for test storage... 
00:08:55.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:55.666 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:55.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.667 --rc genhtml_branch_coverage=1 00:08:55.667 --rc genhtml_function_coverage=1 00:08:55.667 --rc genhtml_legend=1 00:08:55.667 --rc geninfo_all_blocks=1 00:08:55.667 --rc geninfo_unexecuted_blocks=1 00:08:55.667 00:08:55.667 ' 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:55.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.667 --rc genhtml_branch_coverage=1 00:08:55.667 --rc genhtml_function_coverage=1 00:08:55.667 --rc genhtml_legend=1 00:08:55.667 --rc geninfo_all_blocks=1 00:08:55.667 --rc geninfo_unexecuted_blocks=1 00:08:55.667 00:08:55.667 ' 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:55.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.667 --rc genhtml_branch_coverage=1 00:08:55.667 --rc genhtml_function_coverage=1 00:08:55.667 --rc genhtml_legend=1 00:08:55.667 --rc geninfo_all_blocks=1 00:08:55.667 --rc geninfo_unexecuted_blocks=1 00:08:55.667 00:08:55.667 ' 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:55.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.667 --rc genhtml_branch_coverage=1 00:08:55.667 --rc genhtml_function_coverage=1 00:08:55.667 --rc genhtml_legend=1 00:08:55.667 --rc geninfo_all_blocks=1 00:08:55.667 --rc geninfo_unexecuted_blocks=1 00:08:55.667 00:08:55.667 ' 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:55.667 ************************************ 00:08:55.667 START TEST dd_uring_copy 00:08:55.667 ************************************ 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:55.667 
23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:55.667 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:55.926 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=twpx60fw0gnxunywjqmu0ao7p1opibjz5woppdyp1bg38twrh33l0cxk6smlxdjzan63kqeeavdtnoienhs1s86hzcbiiu9sy2op9lv7gfvdojy7is77fl1auqq6q2vm6ut27is87ruc0gd9np024xkl6u4r9udessc667imd7z3yddqczu0hnm32qk8fj8mxhsh9txibvgfksbbxd8jmnthpf7c8anjq7e6gjjnz6t8l0q8pi5xui9il6hdelyik2g4ejtt4uk1vsj03nl45vgsx5mfj79ce9mgbp7b6lnaxlm7qwpkpec8s7xtelqky65emoh60tj3q897cn8a8pae5yqwcnzyklgftghli106zemf16hdmp7t500bgok3aqjjpzu6lnhe506kyszowwf55bp9g9mv89yy9a4t38bepvg4gwehgn8i7w07uc674arrue20kw28lmhypj7wpx032a5hgo2fesz5tsw10lnczui4n0flsgb00q3lx89ac3xwfmkv7paisrpb585nvtj5l1hg7is94740nouw4t1uyqr3sbhe7iejefm6kn1inwusobv1b3ybqqg0urw5ega38ohgqrhzukye80auxnff2g5t70qjfqjlzbtcklmnw3k8dqib9l62bg6ihtp0z4pdtxqszvpsmboiekyhtldve7bnzo7w1y6sk7b0f7wyaq9u0jy6q7tuw8y8u4redy7knf2wh1tfaj1lerif8k0uysh6ecwogjwvfs873whmxedleu9e3wp2yi96qgqmoxhifb1ms3sitblidirfw90vmxwly3uggky1vrkiywpmovdna5r1klne9givk6imo9bhk23jxeiesy14xx8sp152p7jceui02y1e05omhxn6ajrxdjqchpzj03gqfmz4y9zsg2aa7f5timcd07p3axhvbqxd616f2uzza3w8yjl5zs0ajcmxya53pdllgqkxkapzhhworsgq88thfmiqcndc545e53p0a7e82kwazk75 00:08:55.926 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
twpx60fw0gnxunywjqmu0ao7p1opibjz5woppdyp1bg38twrh33l0cxk6smlxdjzan63kqeeavdtnoienhs1s86hzcbiiu9sy2op9lv7gfvdojy7is77fl1auqq6q2vm6ut27is87ruc0gd9np024xkl6u4r9udessc667imd7z3yddqczu0hnm32qk8fj8mxhsh9txibvgfksbbxd8jmnthpf7c8anjq7e6gjjnz6t8l0q8pi5xui9il6hdelyik2g4ejtt4uk1vsj03nl45vgsx5mfj79ce9mgbp7b6lnaxlm7qwpkpec8s7xtelqky65emoh60tj3q897cn8a8pae5yqwcnzyklgftghli106zemf16hdmp7t500bgok3aqjjpzu6lnhe506kyszowwf55bp9g9mv89yy9a4t38bepvg4gwehgn8i7w07uc674arrue20kw28lmhypj7wpx032a5hgo2fesz5tsw10lnczui4n0flsgb00q3lx89ac3xwfmkv7paisrpb585nvtj5l1hg7is94740nouw4t1uyqr3sbhe7iejefm6kn1inwusobv1b3ybqqg0urw5ega38ohgqrhzukye80auxnff2g5t70qjfqjlzbtcklmnw3k8dqib9l62bg6ihtp0z4pdtxqszvpsmboiekyhtldve7bnzo7w1y6sk7b0f7wyaq9u0jy6q7tuw8y8u4redy7knf2wh1tfaj1lerif8k0uysh6ecwogjwvfs873whmxedleu9e3wp2yi96qgqmoxhifb1ms3sitblidirfw90vmxwly3uggky1vrkiywpmovdna5r1klne9givk6imo9bhk23jxeiesy14xx8sp152p7jceui02y1e05omhxn6ajrxdjqchpzj03gqfmz4y9zsg2aa7f5timcd07p3axhvbqxd616f2uzza3w8yjl5zs0ajcmxya53pdllgqkxkapzhhworsgq88thfmiqcndc545e53p0a7e82kwazk75 00:08:55.926 23:53:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:55.926 [2024-11-18 23:53:02.455644] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:55.926 [2024-11-18 23:53:02.455863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63093 ] 00:08:56.185 [2024-11-18 23:53:02.636969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.185 [2024-11-18 23:53:02.728262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.444 [2024-11-18 23:53:02.885325] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.386  [2024-11-18T23:53:05.984Z] Copying: 511/511 [MB] (average 1372 MBps) 00:08:59.292 00:08:59.292 23:53:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:59.292 23:53:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:59.292 23:53:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:59.292 23:53:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:59.292 { 00:08:59.292 "subsystems": [ 00:08:59.292 { 00:08:59.292 "subsystem": "bdev", 00:08:59.292 "config": [ 00:08:59.292 { 00:08:59.292 "params": { 00:08:59.292 "block_size": 512, 00:08:59.292 "num_blocks": 1048576, 00:08:59.292 "name": "malloc0" 00:08:59.292 }, 00:08:59.292 "method": "bdev_malloc_create" 00:08:59.292 }, 00:08:59.292 { 00:08:59.292 "params": { 00:08:59.292 "filename": "/dev/zram1", 00:08:59.292 "name": "uring0" 00:08:59.292 }, 00:08:59.292 "method": "bdev_uring_create" 00:08:59.292 }, 00:08:59.292 { 00:08:59.292 "method": "bdev_wait_for_examine" 00:08:59.292 } 00:08:59.292 ] 00:08:59.292 } 00:08:59.292 ] 00:08:59.292 } 00:08:59.292 [2024-11-18 23:53:05.728956] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
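The uring copy target here is a 512 MiB zram disk, created through the sysfs hot-add interface traced above, and the append is sized so magic.dump0 fills it exactly: the 1024-byte magic plus echo's newline plus 536869887 zero bytes is 536870912 bytes, i.e. 512 MiB. In outline (needs root; the device id returned by hot_add may differ from the 1 this run got):

    id=$(cat /sys/class/zram-control/hot_add)
    echo 512M > "/sys/block/zram${id}/disksize"
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    # 1024 (magic) + 1 (newline) + 536869887 (zeroes) = 536870912 = 512 MiB
    $DD --if=/dev/zero --of=test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1

spdk_dd then attaches /dev/zram1 as bdev uring0 via bdev_uring_create, as the JSON in the runs below shows.
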
00:08:59.292 [2024-11-18 23:53:05.729170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63132 ] 00:08:59.292 [2024-11-18 23:53:05.905479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.551 [2024-11-18 23:53:06.003789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.551 [2024-11-18 23:53:06.153725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:01.454  [2024-11-18T23:53:08.712Z] Copying: 194/512 [MB] (194 MBps) [2024-11-18T23:53:09.650Z] Copying: 386/512 [MB] (191 MBps) [2024-11-18T23:53:11.551Z] Copying: 512/512 [MB] (average 192 MBps) 00:09:04.859 00:09:04.859 23:53:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:09:04.859 23:53:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:09:04.859 23:53:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:04.859 23:53:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:04.859 { 00:09:04.859 "subsystems": [ 00:09:04.859 { 00:09:04.859 "subsystem": "bdev", 00:09:04.859 "config": [ 00:09:04.859 { 00:09:04.859 "params": { 00:09:04.859 "block_size": 512, 00:09:04.859 "num_blocks": 1048576, 00:09:04.859 "name": "malloc0" 00:09:04.859 }, 00:09:04.859 "method": "bdev_malloc_create" 00:09:04.859 }, 00:09:04.859 { 00:09:04.859 "params": { 00:09:04.859 "filename": "/dev/zram1", 00:09:04.859 "name": "uring0" 00:09:04.859 }, 00:09:04.859 "method": "bdev_uring_create" 00:09:04.859 }, 00:09:04.859 { 00:09:04.859 "method": "bdev_wait_for_examine" 00:09:04.859 } 00:09:04.859 ] 00:09:04.859 } 00:09:04.859 ] 00:09:04.859 } 00:09:04.859 [2024-11-18 23:53:11.367122] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:04.859 [2024-11-18 23:53:11.367314] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63205 ] 00:09:04.859 [2024-11-18 23:53:11.544440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.118 [2024-11-18 23:53:11.639060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.118 [2024-11-18 23:53:11.793114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:07.018  [2024-11-18T23:53:14.644Z] Copying: 146/512 [MB] (146 MBps) [2024-11-18T23:53:15.580Z] Copying: 307/512 [MB] (161 MBps) [2024-11-18T23:53:15.838Z] Copying: 444/512 [MB] (137 MBps) [2024-11-18T23:53:17.739Z] Copying: 512/512 [MB] (average 148 MBps) 00:09:11.047 00:09:11.047 23:53:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:11.047 23:53:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ twpx60fw0gnxunywjqmu0ao7p1opibjz5woppdyp1bg38twrh33l0cxk6smlxdjzan63kqeeavdtnoienhs1s86hzcbiiu9sy2op9lv7gfvdojy7is77fl1auqq6q2vm6ut27is87ruc0gd9np024xkl6u4r9udessc667imd7z3yddqczu0hnm32qk8fj8mxhsh9txibvgfksbbxd8jmnthpf7c8anjq7e6gjjnz6t8l0q8pi5xui9il6hdelyik2g4ejtt4uk1vsj03nl45vgsx5mfj79ce9mgbp7b6lnaxlm7qwpkpec8s7xtelqky65emoh60tj3q897cn8a8pae5yqwcnzyklgftghli106zemf16hdmp7t500bgok3aqjjpzu6lnhe506kyszowwf55bp9g9mv89yy9a4t38bepvg4gwehgn8i7w07uc674arrue20kw28lmhypj7wpx032a5hgo2fesz5tsw10lnczui4n0flsgb00q3lx89ac3xwfmkv7paisrpb585nvtj5l1hg7is94740nouw4t1uyqr3sbhe7iejefm6kn1inwusobv1b3ybqqg0urw5ega38ohgqrhzukye80auxnff2g5t70qjfqjlzbtcklmnw3k8dqib9l62bg6ihtp0z4pdtxqszvpsmboiekyhtldve7bnzo7w1y6sk7b0f7wyaq9u0jy6q7tuw8y8u4redy7knf2wh1tfaj1lerif8k0uysh6ecwogjwvfs873whmxedleu9e3wp2yi96qgqmoxhifb1ms3sitblidirfw90vmxwly3uggky1vrkiywpmovdna5r1klne9givk6imo9bhk23jxeiesy14xx8sp152p7jceui02y1e05omhxn6ajrxdjqchpzj03gqfmz4y9zsg2aa7f5timcd07p3axhvbqxd616f2uzza3w8yjl5zs0ajcmxya53pdllgqkxkapzhhworsgq88thfmiqcndc545e53p0a7e82kwazk75 == 
\t\w\p\x\6\0\f\w\0\g\n\x\u\n\y\w\j\q\m\u\0\a\o\7\p\1\o\p\i\b\j\z\5\w\o\p\p\d\y\p\1\b\g\3\8\t\w\r\h\3\3\l\0\c\x\k\6\s\m\l\x\d\j\z\a\n\6\3\k\q\e\e\a\v\d\t\n\o\i\e\n\h\s\1\s\8\6\h\z\c\b\i\i\u\9\s\y\2\o\p\9\l\v\7\g\f\v\d\o\j\y\7\i\s\7\7\f\l\1\a\u\q\q\6\q\2\v\m\6\u\t\2\7\i\s\8\7\r\u\c\0\g\d\9\n\p\0\2\4\x\k\l\6\u\4\r\9\u\d\e\s\s\c\6\6\7\i\m\d\7\z\3\y\d\d\q\c\z\u\0\h\n\m\3\2\q\k\8\f\j\8\m\x\h\s\h\9\t\x\i\b\v\g\f\k\s\b\b\x\d\8\j\m\n\t\h\p\f\7\c\8\a\n\j\q\7\e\6\g\j\j\n\z\6\t\8\l\0\q\8\p\i\5\x\u\i\9\i\l\6\h\d\e\l\y\i\k\2\g\4\e\j\t\t\4\u\k\1\v\s\j\0\3\n\l\4\5\v\g\s\x\5\m\f\j\7\9\c\e\9\m\g\b\p\7\b\6\l\n\a\x\l\m\7\q\w\p\k\p\e\c\8\s\7\x\t\e\l\q\k\y\6\5\e\m\o\h\6\0\t\j\3\q\8\9\7\c\n\8\a\8\p\a\e\5\y\q\w\c\n\z\y\k\l\g\f\t\g\h\l\i\1\0\6\z\e\m\f\1\6\h\d\m\p\7\t\5\0\0\b\g\o\k\3\a\q\j\j\p\z\u\6\l\n\h\e\5\0\6\k\y\s\z\o\w\w\f\5\5\b\p\9\g\9\m\v\8\9\y\y\9\a\4\t\3\8\b\e\p\v\g\4\g\w\e\h\g\n\8\i\7\w\0\7\u\c\6\7\4\a\r\r\u\e\2\0\k\w\2\8\l\m\h\y\p\j\7\w\p\x\0\3\2\a\5\h\g\o\2\f\e\s\z\5\t\s\w\1\0\l\n\c\z\u\i\4\n\0\f\l\s\g\b\0\0\q\3\l\x\8\9\a\c\3\x\w\f\m\k\v\7\p\a\i\s\r\p\b\5\8\5\n\v\t\j\5\l\1\h\g\7\i\s\9\4\7\4\0\n\o\u\w\4\t\1\u\y\q\r\3\s\b\h\e\7\i\e\j\e\f\m\6\k\n\1\i\n\w\u\s\o\b\v\1\b\3\y\b\q\q\g\0\u\r\w\5\e\g\a\3\8\o\h\g\q\r\h\z\u\k\y\e\8\0\a\u\x\n\f\f\2\g\5\t\7\0\q\j\f\q\j\l\z\b\t\c\k\l\m\n\w\3\k\8\d\q\i\b\9\l\6\2\b\g\6\i\h\t\p\0\z\4\p\d\t\x\q\s\z\v\p\s\m\b\o\i\e\k\y\h\t\l\d\v\e\7\b\n\z\o\7\w\1\y\6\s\k\7\b\0\f\7\w\y\a\q\9\u\0\j\y\6\q\7\t\u\w\8\y\8\u\4\r\e\d\y\7\k\n\f\2\w\h\1\t\f\a\j\1\l\e\r\i\f\8\k\0\u\y\s\h\6\e\c\w\o\g\j\w\v\f\s\8\7\3\w\h\m\x\e\d\l\e\u\9\e\3\w\p\2\y\i\9\6\q\g\q\m\o\x\h\i\f\b\1\m\s\3\s\i\t\b\l\i\d\i\r\f\w\9\0\v\m\x\w\l\y\3\u\g\g\k\y\1\v\r\k\i\y\w\p\m\o\v\d\n\a\5\r\1\k\l\n\e\9\g\i\v\k\6\i\m\o\9\b\h\k\2\3\j\x\e\i\e\s\y\1\4\x\x\8\s\p\1\5\2\p\7\j\c\e\u\i\0\2\y\1\e\0\5\o\m\h\x\n\6\a\j\r\x\d\j\q\c\h\p\z\j\0\3\g\q\f\m\z\4\y\9\z\s\g\2\a\a\7\f\5\t\i\m\c\d\0\7\p\3\a\x\h\v\b\q\x\d\6\1\6\f\2\u\z\z\a\3\w\8\y\j\l\5\z\s\0\a\j\c\m\x\y\a\5\3\p\d\l\l\g\q\k\x\k\a\p\z\h\h\w\o\r\s\g\q\8\8\t\h\f\m\i\q\c\n\d\c\5\4\5\e\5\3\p\0\a\7\e\8\2\k\w\a\z\k\7\5 ]] 00:09:11.047 23:53:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:11.047 23:53:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ twpx60fw0gnxunywjqmu0ao7p1opibjz5woppdyp1bg38twrh33l0cxk6smlxdjzan63kqeeavdtnoienhs1s86hzcbiiu9sy2op9lv7gfvdojy7is77fl1auqq6q2vm6ut27is87ruc0gd9np024xkl6u4r9udessc667imd7z3yddqczu0hnm32qk8fj8mxhsh9txibvgfksbbxd8jmnthpf7c8anjq7e6gjjnz6t8l0q8pi5xui9il6hdelyik2g4ejtt4uk1vsj03nl45vgsx5mfj79ce9mgbp7b6lnaxlm7qwpkpec8s7xtelqky65emoh60tj3q897cn8a8pae5yqwcnzyklgftghli106zemf16hdmp7t500bgok3aqjjpzu6lnhe506kyszowwf55bp9g9mv89yy9a4t38bepvg4gwehgn8i7w07uc674arrue20kw28lmhypj7wpx032a5hgo2fesz5tsw10lnczui4n0flsgb00q3lx89ac3xwfmkv7paisrpb585nvtj5l1hg7is94740nouw4t1uyqr3sbhe7iejefm6kn1inwusobv1b3ybqqg0urw5ega38ohgqrhzukye80auxnff2g5t70qjfqjlzbtcklmnw3k8dqib9l62bg6ihtp0z4pdtxqszvpsmboiekyhtldve7bnzo7w1y6sk7b0f7wyaq9u0jy6q7tuw8y8u4redy7knf2wh1tfaj1lerif8k0uysh6ecwogjwvfs873whmxedleu9e3wp2yi96qgqmoxhifb1ms3sitblidirfw90vmxwly3uggky1vrkiywpmovdna5r1klne9givk6imo9bhk23jxeiesy14xx8sp152p7jceui02y1e05omhxn6ajrxdjqchpzj03gqfmz4y9zsg2aa7f5timcd07p3axhvbqxd616f2uzza3w8yjl5zs0ajcmxya53pdllgqkxkapzhhworsgq88thfmiqcndc545e53p0a7e82kwazk75 == 
\t\w\p\x\6\0\f\w\0\g\n\x\u\n\y\w\j\q\m\u\0\a\o\7\p\1\o\p\i\b\j\z\5\w\o\p\p\d\y\p\1\b\g\3\8\t\w\r\h\3\3\l\0\c\x\k\6\s\m\l\x\d\j\z\a\n\6\3\k\q\e\e\a\v\d\t\n\o\i\e\n\h\s\1\s\8\6\h\z\c\b\i\i\u\9\s\y\2\o\p\9\l\v\7\g\f\v\d\o\j\y\7\i\s\7\7\f\l\1\a\u\q\q\6\q\2\v\m\6\u\t\2\7\i\s\8\7\r\u\c\0\g\d\9\n\p\0\2\4\x\k\l\6\u\4\r\9\u\d\e\s\s\c\6\6\7\i\m\d\7\z\3\y\d\d\q\c\z\u\0\h\n\m\3\2\q\k\8\f\j\8\m\x\h\s\h\9\t\x\i\b\v\g\f\k\s\b\b\x\d\8\j\m\n\t\h\p\f\7\c\8\a\n\j\q\7\e\6\g\j\j\n\z\6\t\8\l\0\q\8\p\i\5\x\u\i\9\i\l\6\h\d\e\l\y\i\k\2\g\4\e\j\t\t\4\u\k\1\v\s\j\0\3\n\l\4\5\v\g\s\x\5\m\f\j\7\9\c\e\9\m\g\b\p\7\b\6\l\n\a\x\l\m\7\q\w\p\k\p\e\c\8\s\7\x\t\e\l\q\k\y\6\5\e\m\o\h\6\0\t\j\3\q\8\9\7\c\n\8\a\8\p\a\e\5\y\q\w\c\n\z\y\k\l\g\f\t\g\h\l\i\1\0\6\z\e\m\f\1\6\h\d\m\p\7\t\5\0\0\b\g\o\k\3\a\q\j\j\p\z\u\6\l\n\h\e\5\0\6\k\y\s\z\o\w\w\f\5\5\b\p\9\g\9\m\v\8\9\y\y\9\a\4\t\3\8\b\e\p\v\g\4\g\w\e\h\g\n\8\i\7\w\0\7\u\c\6\7\4\a\r\r\u\e\2\0\k\w\2\8\l\m\h\y\p\j\7\w\p\x\0\3\2\a\5\h\g\o\2\f\e\s\z\5\t\s\w\1\0\l\n\c\z\u\i\4\n\0\f\l\s\g\b\0\0\q\3\l\x\8\9\a\c\3\x\w\f\m\k\v\7\p\a\i\s\r\p\b\5\8\5\n\v\t\j\5\l\1\h\g\7\i\s\9\4\7\4\0\n\o\u\w\4\t\1\u\y\q\r\3\s\b\h\e\7\i\e\j\e\f\m\6\k\n\1\i\n\w\u\s\o\b\v\1\b\3\y\b\q\q\g\0\u\r\w\5\e\g\a\3\8\o\h\g\q\r\h\z\u\k\y\e\8\0\a\u\x\n\f\f\2\g\5\t\7\0\q\j\f\q\j\l\z\b\t\c\k\l\m\n\w\3\k\8\d\q\i\b\9\l\6\2\b\g\6\i\h\t\p\0\z\4\p\d\t\x\q\s\z\v\p\s\m\b\o\i\e\k\y\h\t\l\d\v\e\7\b\n\z\o\7\w\1\y\6\s\k\7\b\0\f\7\w\y\a\q\9\u\0\j\y\6\q\7\t\u\w\8\y\8\u\4\r\e\d\y\7\k\n\f\2\w\h\1\t\f\a\j\1\l\e\r\i\f\8\k\0\u\y\s\h\6\e\c\w\o\g\j\w\v\f\s\8\7\3\w\h\m\x\e\d\l\e\u\9\e\3\w\p\2\y\i\9\6\q\g\q\m\o\x\h\i\f\b\1\m\s\3\s\i\t\b\l\i\d\i\r\f\w\9\0\v\m\x\w\l\y\3\u\g\g\k\y\1\v\r\k\i\y\w\p\m\o\v\d\n\a\5\r\1\k\l\n\e\9\g\i\v\k\6\i\m\o\9\b\h\k\2\3\j\x\e\i\e\s\y\1\4\x\x\8\s\p\1\5\2\p\7\j\c\e\u\i\0\2\y\1\e\0\5\o\m\h\x\n\6\a\j\r\x\d\j\q\c\h\p\z\j\0\3\g\q\f\m\z\4\y\9\z\s\g\2\a\a\7\f\5\t\i\m\c\d\0\7\p\3\a\x\h\v\b\q\x\d\6\1\6\f\2\u\z\z\a\3\w\8\y\j\l\5\z\s\0\a\j\c\m\x\y\a\5\3\p\d\l\l\g\q\k\x\k\a\p\z\h\h\w\o\r\s\g\q\8\8\t\h\f\m\i\q\c\n\d\c\5\4\5\e\5\3\p\0\a\7\e\8\2\k\w\a\z\k\7\5 ]] 00:09:11.047 23:53:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:11.316 23:53:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:09:11.316 23:53:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:11.316 23:53:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:11.316 23:53:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:11.587 { 00:09:11.587 "subsystems": [ 00:09:11.587 { 00:09:11.587 "subsystem": "bdev", 00:09:11.587 "config": [ 00:09:11.587 { 00:09:11.587 "params": { 00:09:11.587 "block_size": 512, 00:09:11.587 "num_blocks": 1048576, 00:09:11.587 "name": "malloc0" 00:09:11.587 }, 00:09:11.587 "method": "bdev_malloc_create" 00:09:11.587 }, 00:09:11.587 { 00:09:11.587 "params": { 00:09:11.587 "filename": "/dev/zram1", 00:09:11.587 "name": "uring0" 00:09:11.587 }, 00:09:11.587 "method": "bdev_uring_create" 00:09:11.587 }, 00:09:11.587 { 00:09:11.587 "method": "bdev_wait_for_examine" 00:09:11.587 } 00:09:11.587 ] 00:09:11.587 } 00:09:11.587 ] 00:09:11.587 } 00:09:11.587 [2024-11-18 23:53:18.104212] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
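The two backslash walls above are not corruption: xtrace prints the quoted right-hand side of == inside [[ ]] with every character escaped, to show it is matched literally rather than as a glob. Each check reads the first 1024 bytes of a dump back into verify_magic and compares it to the generated magic, and diff -q then compares the files whole. Reconstructed, with the redirections that uring.sh implies but the trace does not show:

    read -rn1024 verify_magic < test/dd/magic.dump0
    [[ $verify_magic == "$magic" ]]   # quoted RHS: byte-for-byte comparison
    read -rn1024 verify_magic < test/dd/magic.dump1
    [[ $verify_magic == "$magic" ]]
    diff -q test/dd/magic.dump0 test/dd/magic.dump1   # whole-file check
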
00:09:11.587 [2024-11-18 23:53:18.104389] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63306 ] 00:09:11.846 [2024-11-18 23:53:18.283917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.846 [2024-11-18 23:53:18.373045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.846 [2024-11-18 23:53:18.530013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.745  [2024-11-18T23:53:21.372Z] Copying: 134/512 [MB] (134 MBps) [2024-11-18T23:53:22.307Z] Copying: 262/512 [MB] (127 MBps) [2024-11-18T23:53:23.242Z] Copying: 393/512 [MB] (131 MBps) [2024-11-18T23:53:25.140Z] Copying: 512/512 [MB] (average 132 MBps) 00:09:18.448 00:09:18.448 23:53:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:18.448 23:53:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:18.448 23:53:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:18.448 23:53:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:18.448 23:53:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:09:18.448 23:53:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:18.448 23:53:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:18.448 23:53:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:18.448 { 00:09:18.448 "subsystems": [ 00:09:18.448 { 00:09:18.448 "subsystem": "bdev", 00:09:18.448 "config": [ 00:09:18.448 { 00:09:18.448 "params": { 00:09:18.448 "block_size": 512, 00:09:18.448 "num_blocks": 1048576, 00:09:18.448 "name": "malloc0" 00:09:18.448 }, 00:09:18.448 "method": "bdev_malloc_create" 00:09:18.448 }, 00:09:18.448 { 00:09:18.448 "params": { 00:09:18.448 "filename": "/dev/zram1", 00:09:18.448 "name": "uring0" 00:09:18.448 }, 00:09:18.448 "method": "bdev_uring_create" 00:09:18.448 }, 00:09:18.448 { 00:09:18.448 "params": { 00:09:18.448 "name": "uring0" 00:09:18.448 }, 00:09:18.448 "method": "bdev_uring_delete" 00:09:18.448 }, 00:09:18.448 { 00:09:18.448 "method": "bdev_wait_for_examine" 00:09:18.448 } 00:09:18.448 ] 00:09:18.448 } 00:09:18.448 ] 00:09:18.448 } 00:09:18.448 [2024-11-18 23:53:24.877477] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
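This cleanup run stacks a bdev_uring_delete after the create, so the uring0 bdev is gone again before dd touches it; the next test (pid 63453 below) then runs spdk_dd against the deleted bdev and asserts that it fails, via the NOT wrapper from autotest_common.sh. A minimal equivalent of that wrapper; the real one also normalizes crash-like exit statuses, which is the es=237 -> es=109 -> es=1 dance visible below, and gen_conf_with_delete is a hypothetical stand-in for the config printed above:

    NOT() { if "$@"; then return 1; else return 0; fi; }
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    # must exit non-zero: uring0 was deleted by the config above
    NOT "$DD" --ib=uring0 --of=/dev/null --json <(gen_conf_with_delete)
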
00:09:18.448 [2024-11-18 23:53:24.877671] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63403 ]
00:09:18.448 [2024-11-18 23:53:25.055724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:18.707 [2024-11-18 23:53:25.142139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:18.707 [2024-11-18 23:53:25.295591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:09:19.274  [2024-11-18T23:53:27.869Z] Copying: 0/0 [B] (average 0 Bps)
00:09:21.177 
00:09:21.177 23:53:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61
00:09:21.177 23:53:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0
00:09:21.177 23:53:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61
00:09:21.177 23:53:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:21.177 23:53:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # :
00:09:21.177 23:53:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf
00:09:21.177 23:53:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable
00:09:21.177 23:53:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:21.177 23:53:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:21.177 23:53:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x
00:09:21.177 23:53:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:21.177 23:53:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:21.177 23:53:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:21.177 23:53:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:21.177 23:53:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:09:21.177 23:53:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61
00:09:21.177 {
00:09:21.177 "subsystems": [
00:09:21.177 {
00:09:21.177 "subsystem": "bdev",
00:09:21.177 "config": [
00:09:21.177 {
00:09:21.177 "params": {
00:09:21.177 "block_size": 512,
00:09:21.177 "num_blocks": 1048576,
00:09:21.177 "name": "malloc0"
00:09:21.177 },
00:09:21.177 "method": "bdev_malloc_create"
00:09:21.177 },
00:09:21.177 {
00:09:21.177 "params": {
00:09:21.177 "filename": "/dev/zram1",
00:09:21.177 "name": "uring0"
00:09:21.177 },
00:09:21.177 "method": "bdev_uring_create"
00:09:21.177 },
00:09:21.177 {
00:09:21.177 "params": {
00:09:21.177 "name": "uring0"
00:09:21.177 },
00:09:21.177 "method": "bdev_uring_delete"
00:09:21.177 },
00:09:21.177 {
00:09:21.177 "method": "bdev_wait_for_examine"
00:09:21.177 }
00:09:21.177 ]
00:09:21.177 }
00:09:21.177 ]
00:09:21.177 }
00:09:21.177 [2024-11-18 23:53:27.806660] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:09:21.177 [2024-11-18 23:53:27.806841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63453 ]
00:09:21.435 [2024-11-18 23:53:27.972009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:21.435 [2024-11-18 23:53:28.059357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:21.694 [2024-11-18 23:53:28.220698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:09:22.261 [2024-11-18 23:53:28.720245] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0
00:09:22.262 [2024-11-18 23:53:28.720327] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device
00:09:22.262 [2024-11-18 23:53:28.720344] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device
00:09:22.262 [2024-11-18 23:53:28.720363] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:23.638 [2024-11-18 23:53:30.311591] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:09:23.896 23:53:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237
00:09:23.896 23:53:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:23.896 23:53:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109
00:09:23.896 23:53:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in
00:09:23.896 23:53:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1
00:09:23.896 23:53:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:23.896 23:53:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1
00:09:23.896 23:53:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1
00:09:23.896 23:53:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]]
00:09:23.896 23:53:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1
00:09:23.896 23:53:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1
00:09:23.896 23:53:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1
00:09:24.155 ************************************
00:09:24.155 END TEST dd_uring_copy
00:09:24.155 ************************************
00:09:24.155 
00:09:24.155 real 0m28.390s
00:09:24.155 user 0m23.122s
00:09:24.155 sys 0m15.731s
00:09:24.155 23:53:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:24.155 23:53:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x
00:09:24.155 
00:09:24.155 real 0m28.630s
00:09:24.155 user 0m23.252s
00:09:24.155 sys 0m15.848s
00:09:24.155 23:53:30 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:24.155 ************************************
00:09:24.155 23:53:30 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x
00:09:24.155 END TEST spdk_dd_uring
00:09:24.155 ************************************
00:09:24.155 23:53:30 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh
00:09:24.155 23:53:30 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:24.155 23:53:30 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:24.155 23:53:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:09:24.155 ************************************
00:09:24.155 START TEST spdk_dd_sparse
00:09:24.155 ************************************
00:09:24.155 23:53:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh
00:09:24.416 * Looking for test storage...
00:09:24.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-:
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-:
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<'
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:24.416 23:53:30 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:24.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:24.416 --rc genhtml_branch_coverage=1
00:09:24.416 --rc genhtml_function_coverage=1
00:09:24.416 --rc genhtml_legend=1
00:09:24.416 --rc geninfo_all_blocks=1
00:09:24.416 --rc geninfo_unexecuted_blocks=1
00:09:24.416 
00:09:24.416 '
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:24.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:24.416 --rc genhtml_branch_coverage=1
00:09:24.416 --rc genhtml_function_coverage=1
00:09:24.416 --rc genhtml_legend=1
00:09:24.416 --rc geninfo_all_blocks=1
00:09:24.416 --rc geninfo_unexecuted_blocks=1
00:09:24.416 
00:09:24.416 '
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:09:24.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:24.416 --rc genhtml_branch_coverage=1
00:09:24.416 --rc genhtml_function_coverage=1
00:09:24.416 --rc genhtml_legend=1
00:09:24.416 --rc geninfo_all_blocks=1
00:09:24.416 --rc geninfo_unexecuted_blocks=1
00:09:24.416 
00:09:24.416 '
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:09:24.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:24.416 --rc genhtml_branch_coverage=1
00:09:24.416 --rc genhtml_function_coverage=1
00:09:24.416 --rc genhtml_legend=1
00:09:24.416 --rc geninfo_all_blocks=1
00:09:24.416 --rc geninfo_unexecuted_blocks=1
00:09:24.416 
00:09:24.416 '
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1
00:09:24.416 1+0 records in
00:09:24.416 1+0 records out
00:09:24.416 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00559379 s, 750 MB/s
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
00:09:24.416 1+0 records in
00:09:24.416 1+0 records out
00:09:24.416 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00468363 s, 896 MB/s
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8
00:09:24.416 1+0 records in
00:09:24.416 1+0 records out
00:09:24.416 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00504558 s, 831 MB/s
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x
00:09:24.416 ************************************
00:09:24.416 START TEST dd_sparse_file_to_file
00:09:24.416 ************************************
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b
00:09:24.416 23:53:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096')
00:09:24.417 23:53:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0
00:09:24.417 23:53:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore')
00:09:24.417 23:53:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1
00:09:24.417 23:53:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62
00:09:24.417 23:53:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf
00:09:24.417 23:53:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable
00:09:24.417 23:53:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x
00:09:24.675 {
00:09:24.675 "subsystems": [
00:09:24.675 {
00:09:24.675 "subsystem": "bdev",
00:09:24.675 "config": [
00:09:24.675 {
00:09:24.675 "params": {
00:09:24.675 "block_size": 4096,
00:09:24.675 "filename": "dd_sparse_aio_disk",
00:09:24.675 "name": "dd_aio"
00:09:24.675 },
00:09:24.675 "method": "bdev_aio_create"
00:09:24.675 },
00:09:24.675 {
00:09:24.675 "params": {
00:09:24.675 "lvs_name": "dd_lvstore",
00:09:24.675 "bdev_name": "dd_aio"
00:09:24.675 },
00:09:24.675 "method": "bdev_lvol_create_lvstore"
00:09:24.675 },
00:09:24.675 {
00:09:24.675 "method": "bdev_wait_for_examine"
00:09:24.675 }
00:09:24.675 ]
00:09:24.675 }
00:09:24.675 ]
00:09:24.675 }
00:09:24.675 [2024-11-18 23:53:31.141708] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:09:24.675 [2024-11-18 23:53:31.141869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63572 ]
00:09:24.675 [2024-11-18 23:53:31.308433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:24.935 [2024-11-18 23:53:31.394767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:24.935 [2024-11-18 23:53:31.559533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:09:25.194  [2024-11-18T23:53:32.834Z] Copying: 12/36 [MB] (average 1090 MBps)
00:09:26.142 
00:09:26.142 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1
00:09:26.142 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]]
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]]
00:09:26.143 
00:09:26.143 real 0m1.632s
00:09:26.143 user 0m1.320s
00:09:26.143 sys 0m0.902s
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x
00:09:26.143 ************************************
00:09:26.143 END TEST dd_sparse_file_to_file
00:09:26.143 ************************************
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x
00:09:26.143 ************************************
00:09:26.143 START TEST dd_sparse_file_to_bdev
00:09:26.143 ************************************
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096')
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true')
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable
00:09:26.143 23:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:09:26.143 {
00:09:26.143 "subsystems": [
00:09:26.143 {
00:09:26.143 "subsystem": "bdev",
00:09:26.143 "config": [
00:09:26.143 {
00:09:26.143 "params": {
00:09:26.143 "block_size": 4096,
00:09:26.143 "filename": "dd_sparse_aio_disk",
00:09:26.143 "name": "dd_aio"
00:09:26.143 },
00:09:26.143 "method": "bdev_aio_create"
00:09:26.143 },
00:09:26.143 {
00:09:26.143 "params": {
00:09:26.143 "lvs_name": "dd_lvstore",
00:09:26.143 "lvol_name": "dd_lvol",
00:09:26.143 "size_in_mib": 36,
00:09:26.143 "thin_provision": true
00:09:26.143 },
00:09:26.143 "method": "bdev_lvol_create"
00:09:26.143 },
00:09:26.143 {
00:09:26.143 "method": "bdev_wait_for_examine"
00:09:26.143 }
00:09:26.143 ]
00:09:26.143 }
00:09:26.143 ]
00:09:26.143 }
00:09:26.143 [2024-11-18 23:53:32.814808] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:09:26.143 [2024-11-18 23:53:32.814950] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63627 ]
00:09:26.402 [2024-11-18 23:53:32.978169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:26.402 [2024-11-18 23:53:33.059374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:26.661 [2024-11-18 23:53:33.217518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:09:26.920  [2024-11-18T23:53:34.550Z] Copying: 12/36 [MB] (average 521 MBps)
00:09:27.858 
00:09:27.858 
00:09:27.858 real 0m1.576s
00:09:27.858 user 0m1.307s
00:09:27.858 sys 0m0.880s
00:09:27.858 23:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:27.858 23:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:09:27.858 ************************************
00:09:27.858 END TEST dd_sparse_file_to_bdev
00:09:27.858 ************************************
00:09:27.858 23:53:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file
00:09:27.858 23:53:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:27.858 23:53:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:27.858 23:53:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x
00:09:27.858 ************************************
00:09:27.858 START TEST dd_sparse_bdev_to_file
00:09:27.858 ************************************
00:09:27.858 23:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file
00:09:27.858 23:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b
00:09:27.859 23:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b
00:09:27.859 23:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096')
00:09:27.859 23:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0
00:09:27.859 23:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62
00:09:27.859 23:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf
00:09:27.859 23:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable
00:09:27.859 23:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x
00:09:27.859 {
00:09:27.859 "subsystems": [
00:09:27.859 {
00:09:27.859 "subsystem": "bdev",
00:09:27.859 "config": [
00:09:27.859 {
00:09:27.859 "params": {
00:09:27.859 "block_size": 4096,
00:09:27.859 "filename": "dd_sparse_aio_disk",
00:09:27.859 "name": "dd_aio"
00:09:27.859 },
00:09:27.859 "method": "bdev_aio_create"
00:09:27.859 },
00:09:27.859 {
00:09:27.859 "method": "bdev_wait_for_examine"
00:09:27.859 }
00:09:27.859 ]
00:09:27.859 }
00:09:27.859 ]
00:09:27.859 }
00:09:27.859 [2024-11-18 23:53:34.471537] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:09:27.859 [2024-11-18 23:53:34.471764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63671 ]
00:09:28.118 [2024-11-18 23:53:34.652143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:28.118 [2024-11-18 23:53:34.740392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:28.377 [2024-11-18 23:53:34.894360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:09:28.377  [2024-11-18T23:53:36.063Z] Copying: 12/36 [MB] (average 1200 MBps)
00:09:29.371 
00:09:29.371 23:53:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2
00:09:29.371 23:53:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736
00:09:29.371 23:53:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3
00:09:29.371 23:53:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736
00:09:29.371 23:53:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]]
00:09:29.371 23:53:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2
00:09:29.371 23:53:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576
00:09:29.371 23:53:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3
00:09:29.371 23:53:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576
00:09:29.371 23:53:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]]
00:09:29.371 
00:09:29.371 real 0m1.623s
00:09:29.371 user 0m1.316s
00:09:29.371 sys 0m0.901s
00:09:29.371 23:53:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:29.371 23:53:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x
00:09:29.371 ************************************
00:09:29.371 END TEST dd_sparse_bdev_to_file
00:09:29.371 ************************************
00:09:29.371 23:53:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup
00:09:29.371 23:53:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk
00:09:29.371 23:53:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1
00:09:29.371 23:53:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2
00:09:29.371 23:53:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3
00:09:29.371 
00:09:29.371 real 0m5.234s
00:09:29.371 user 0m4.131s
00:09:29.371 sys 0m2.885s
00:09:29.630 23:53:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:29.630 23:53:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x
00:09:29.630 ************************************
00:09:29.630 END TEST spdk_dd_sparse
00:09:29.630 ************************************
00:09:29.630 23:53:36 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh
00:09:29.630 23:53:36 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:29.630 23:53:36 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:29.630 23:53:36 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:09:29.630 ************************************
00:09:29.630 START TEST spdk_dd_negative
00:09:29.630 ************************************
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh
00:09:29.630 * Looking for test storage...
00:09:29.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-:
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-:
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<'
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:29.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:29.630 --rc genhtml_branch_coverage=1
00:09:29.630 --rc genhtml_function_coverage=1
00:09:29.630 --rc genhtml_legend=1
00:09:29.630 --rc geninfo_all_blocks=1
00:09:29.630 --rc geninfo_unexecuted_blocks=1
00:09:29.630 
00:09:29.630 '
00:09:29.630 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:29.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:29.631 --rc genhtml_branch_coverage=1
00:09:29.631 --rc genhtml_function_coverage=1
00:09:29.631 --rc genhtml_legend=1
00:09:29.631 --rc geninfo_all_blocks=1
00:09:29.631 --rc geninfo_unexecuted_blocks=1
00:09:29.631 
00:09:29.631 '
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:09:29.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:29.631 --rc genhtml_branch_coverage=1
00:09:29.631 --rc genhtml_function_coverage=1
00:09:29.631 --rc genhtml_legend=1
00:09:29.631 --rc geninfo_all_blocks=1
00:09:29.631 --rc geninfo_unexecuted_blocks=1
00:09:29.631 
00:09:29.631 '
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:09:29.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:29.631 --rc genhtml_branch_coverage=1
00:09:29.631 --rc genhtml_function_coverage=1
00:09:29.631 --rc genhtml_legend=1
00:09:29.631 --rc geninfo_all_blocks=1
00:09:29.631 --rc geninfo_unexecuted_blocks=1
00:09:29.631 
00:09:29.631 '
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:09:29.631 ************************************
00:09:29.631 START TEST dd_invalid_arguments
00:09:29.631 ************************************
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:09:29.631 23:53:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:09:29.890 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options]
00:09:29.890 
00:09:29.890 CPU options:
00:09:29.890 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK
00:09:29.890 (like [0,1,10])
00:09:29.890 --lcores lcore to CPU mapping list. The list is in the format:
00:09:29.890 [<,lcores[@CPUs]>...]
00:09:29.890 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"'
00:09:29.890 Within the group, '-' is used for range separator,
00:09:29.890 ',' is used for single number separator.
00:09:29.890 '( )' can be omitted for single element group,
00:09:29.890 '@' can be omitted if cpus and lcores have the same value
00:09:29.890 --disable-cpumask-locks Disable CPU core lock files.
00:09:29.890 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all
00:09:29.890 pollers in the app support interrupt mode)
00:09:29.890 -p, --main-core main (primary) core for DPDK
00:09:29.890 
00:09:29.890 Configuration options:
00:09:29.890 -c, --config, --json JSON config file
00:09:29.890 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock)
00:09:29.890 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value.
00:09:29.890 --wait-for-rpc wait for RPCs to initialize subsystems
00:09:29.890 --rpcs-allowed comma-separated list of permitted RPCS
00:09:29.890 --json-ignore-init-errors don't exit on invalid config entry
00:09:29.890 
00:09:29.890 Memory options:
00:09:29.890 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA)
00:09:29.890 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000)
00:09:29.890 --huge-dir use a specific hugetlbfs mount to reserve memory from
00:09:29.890 -R, --huge-unlink unlink huge files after initialization
00:09:29.890 -n, --mem-channels number of memory channels used for DPDK
00:09:29.890 -s, --mem-size memory size in MB for DPDK (default: 0MB)
00:09:29.890 --msg-mempool-size global message memory pool size in count (default: 262143)
00:09:29.890 --no-huge run without using hugepages
00:09:29.890 --enforce-numa enforce NUMA allocations from the specified NUMA node
00:09:29.890 -i, --shm-id shared memory ID (optional)
00:09:29.890 -g, --single-file-segments force creating just one hugetlbfs file
00:09:29.890 
00:09:29.890 PCI options:
00:09:29.890 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time)
00:09:29.890 -B, --pci-blocked pci addr to block (can be used more than once)
00:09:29.890 -u, --no-pci disable PCI access
00:09:29.890 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver
00:09:29.890 
00:09:29.890 Log options:
00:09:29.890 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio,
00:09:29.890 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc,
00:09:29.890 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb,
00:09:29.890 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc,
00:09:29.890 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, fuse_dispatcher,
00:09:29.890 gpt_parse, idxd, ioat, iscsi_init, json_util, keyring, log_rpc, lvol,
00:09:29.890 lvol_rpc, notify_rpc, nvme, nvme_auth, nvme_cuse, nvme_vfio, opal,
00:09:29.890 reactor, rpc, rpc_client, scsi, sock, sock_posix, spdk_aio_mgr_io,
00:09:29.890 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal,
00:09:29.890 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu,
00:09:29.890 vfu_virtio, vfu_virtio_blk, vfu_virtio_fs, vfu_virtio_fs_data,
00:09:29.890 vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio,
00:09:29.890 virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd)
00:09:29.890 --silence-noticelog disable notice level logging to stderr
00:09:29.890 
00:09:29.890 Trace options:
00:09:29.890 --num-trace-entries number of trace entries for each core, must be power of 2,
00:09:29.890 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii='
00:09:29.890 [2024-11-18 23:53:36.418294] spdk_dd.c:1480:main: *ERROR*: Invalid arguments
00:09:29.890 setting 0 to disable trace (default 32768)
00:09:29.890 Tracepoints vary in size and can use more than one trace entry.
00:09:29.890 -e, --tpoint-group [:]
00:09:29.890 group_name - tracepoint group name for spdk trace buffers (scsi, bdev,
00:09:29.890 ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock,
00:09:29.890 blob, bdev_raid, scheduler, all).
00:09:29.890 tpoint_mask - tracepoint mask for enabling individual tpoints inside
00:09:29.890 a tracepoint group. First tpoint inside a group can be enabled by
00:09:29.890 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be
00:09:29.890 combined (e.g. thread,bdev:0x1). All available tpoints can be found
00:09:29.890 in /include/spdk_internal/trace_defs.h
00:09:29.890 
00:09:29.890 Other options:
00:09:29.890 -h, --help show this usage
00:09:29.890 -v, --version print SPDK version
00:09:29.890 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY
00:09:29.890 --env-context Opaque context for use of the env implementation
00:09:29.890 
00:09:29.890 Application specific:
00:09:29.890 [--------- DD Options ---------]
00:09:29.890 --if Input file. Must specify either --if or --ib.
00:09:29.890 --ib Input bdev. Must specifier either --if or --ib
00:09:29.890 --of Output file. Must specify either --of or --ob.
00:09:29.890 --ob Output bdev. Must specify either --of or --ob.
00:09:29.890 --iflag Input file flags.
00:09:29.890 --oflag Output file flags.
00:09:29.890 --bs I/O unit size (default: 4096)
00:09:29.890 --qd Queue depth (default: 2)
00:09:29.890 --count I/O unit count. The number of I/O units to copy. (default: all)
00:09:29.890 --skip Skip this many I/O units at start of input. (default: 0)
00:09:29.890 --seek Skip this many I/O units at start of output. (default: 0)
00:09:29.890 --aio Force usage of AIO. (by default io_uring is used if available)
00:09:29.890 --sparse Enable hole skipping in input target
00:09:29.890 Available iflag and oflag values:
00:09:29.890 append - append mode
00:09:29.890 direct - use direct I/O for data
00:09:29.890 directory - fail unless a directory
00:09:29.890 dsync - use synchronized I/O for data
00:09:29.890 noatime - do not update access time
00:09:29.890 noctty - do not assign controlling terminal from file
00:09:29.890 nofollow - do not follow symlinks
00:09:29.890 nonblock - use non-blocking I/O
00:09:29.890 sync - use synchronized I/O for data and metadata
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:29.890 
00:09:29.890 real 0m0.174s
00:09:29.890 user 0m0.088s
00:09:29.890 sys 0m0.083s
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x
00:09:29.890 ************************************
00:09:29.890 END TEST dd_invalid_arguments
00:09:29.890 ************************************
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:09:29.890 ************************************
00:09:29.890 START TEST dd_double_input
00:09:29.890 ************************************
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:09:29.890 23:53:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
00:09:30.149 [2024-11-18 23:53:36.617746] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both.
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:30.149 
00:09:30.149 real 0m0.146s
00:09:30.149 user 0m0.088s
00:09:30.149 sys 0m0.056s
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x
00:09:30.149 ************************************
00:09:30.149 END TEST dd_double_input
00:09:30.149 ************************************
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:09:30.149 ************************************
00:09:30.149 START TEST dd_double_output
00:09:30.149 ************************************
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:09:30.149 23:53:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=
00:09:30.149 [2024-11-18 23:53:36.812811] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both.
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:30.408 
00:09:30.408 real 0m0.137s
00:09:30.408 user 0m0.070s
00:09:30.408 sys 0m0.065s
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x
00:09:30.408 ************************************
00:09:30.408 END TEST dd_double_output
00:09:30.408 ************************************
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:09:30.408 ************************************
00:09:30.408 START TEST dd_no_input
00:09:30.408 ************************************
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:09:30.408 23:53:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=
00:09:30.409 [2024-11-18 23:53:37.023329] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib
00:09:30.409 23:53:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22
00:09:30.409 23:53:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:30.409 23:53:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:30.409 23:53:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:30.409 
00:09:30.409 real 0m0.166s
00:09:30.409 user 0m0.084s
00:09:30.409 sys 0m0.081s
00:09:30.409 23:53:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:30.409 ************************************
00:09:30.409 END TEST dd_no_input
00:09:30.409 ************************************
00:09:30.409 23:53:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x
00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output
00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:09:30.668 ************************************
00:09:30.668 START TEST dd_no_output
00:09:30.668 ************************************
00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output
00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0
00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:09:30.668 [2024-11-18 23:53:37.245796] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob
00:09:30.668 23:53:37 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:30.668 00:09:30.668 real 0m0.173s 00:09:30.668 user 0m0.104s 00:09:30.668 sys 0m0.067s 00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.668 ************************************ 00:09:30.668 END TEST dd_no_output 00:09:30.668 ************************************ 00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.668 23:53:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:30.927 ************************************ 00:09:30.927 START TEST dd_wrong_blocksize 00:09:30.927 ************************************ 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:30.927 [2024-11-18 23:53:37.477985] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:30.927 00:09:30.927 real 0m0.173s 00:09:30.927 user 0m0.100s 00:09:30.927 sys 0m0.071s 00:09:30.927 ************************************ 00:09:30.927 END TEST dd_wrong_blocksize 00:09:30.927 ************************************ 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:30.927 ************************************ 00:09:30.927 START TEST dd_smaller_blocksize 00:09:30.927 ************************************ 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.927 23:53:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.928 23:53:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.928 23:53:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:30.928 
23:53:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:30.928 23:53:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:31.186 [2024-11-18 23:53:37.703228] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:31.186 [2024-11-18 23:53:37.703406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63921 ] 00:09:31.446 [2024-11-18 23:53:37.886933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.446 [2024-11-18 23:53:38.011961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.705 [2024-11-18 23:53:38.169903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.964 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:32.224 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:32.224 [2024-11-18 23:53:38.816068] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:32.224 [2024-11-18 23:53:38.816490] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:32.792 [2024-11-18 23:53:39.388612] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:33.051 00:09:33.051 real 0m2.022s 00:09:33.051 user 0m1.299s 00:09:33.051 sys 0m0.611s 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.051 ************************************ 00:09:33.051 END TEST dd_smaller_blocksize 00:09:33.051 ************************************ 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:33.051 ************************************ 00:09:33.051 START TEST dd_invalid_count 00:09:33.051 ************************************ 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
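
The exit-status bookkeeping traced above repeats in every test in this suite: the NOT wrapper from common/autotest_common.sh captures spdk_dd's status in es, reduces statuses above 128 by 128 (244 becomes 116 in the run just finished; 234 and 228 appear later), folds known copy-failure codes down to 1, and finally asserts that es is non-zero. A minimal sketch of that pattern, written as a lower-case not() to make clear it is a reconstruction and not the actual helper, whose real implementation carries more cases (for example the [[ -n '' ]] expected-string check visible in the trace):

    # Sketch of the assert-failure wrapper seen in the xtrace above.
    not() {
        local es=0
        "$@" || es=$?                        # run the command, keep its status
        (( es > 128 )) && es=$(( es - 128 )) # 244 -> 116, as in the trace
        case "$es" in
            100|106|116) es=1 ;;             # codes observed in this log
        esac
        (( !es == 0 ))                       # succeed only if the command failed
    }

Used as `not /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=`, it returns 0 exactly when spdk_dd rejects the arguments, which is why each negative test above can end with es=1 or es=22 and still count as a pass.
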
00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.051 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.052 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:33.052 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:33.311 [2024-11-18 23:53:39.776125] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:33.311 ************************************ 00:09:33.311 END TEST dd_invalid_count 00:09:33.311 ************************************ 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:33.311 00:09:33.311 real 0m0.164s 00:09:33.311 user 0m0.093s 00:09:33.311 sys 0m0.069s 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:33.311 ************************************ 
00:09:33.311 START TEST dd_invalid_oflag 00:09:33.311 ************************************ 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:33.311 23:53:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:33.311 [2024-11-18 23:53:39.991936] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:09:33.571 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:09:33.571 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.571 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:33.571 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:33.571 00:09:33.571 real 0m0.173s 00:09:33.571 user 0m0.091s 00:09:33.571 sys 0m0.080s 00:09:33.571 ************************************ 00:09:33.571 END TEST dd_invalid_oflag 00:09:33.571 ************************************ 00:09:33.571 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.571 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:33.572 ************************************ 00:09:33.572 START TEST dd_invalid_iflag 00:09:33.572 
************************************ 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:33.572 [2024-11-18 23:53:40.190817] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:33.572 00:09:33.572 real 0m0.131s 00:09:33.572 user 0m0.068s 00:09:33.572 sys 0m0.063s 00:09:33.572 ************************************ 00:09:33.572 END TEST dd_invalid_iflag 00:09:33.572 ************************************ 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.572 23:53:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:09:33.832 23:53:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:09:33.832 23:53:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.832 23:53:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.832 23:53:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:33.832 ************************************ 00:09:33.832 START TEST dd_unknown_flag 00:09:33.832 ************************************ 00:09:33.832 
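
Both flag-pairing tests that just completed lean on the same validation in spdk_dd's main(): --oflag is accepted only together with --of, and --iflag only together with --if, so handing either one an input/output bdev pair triggers the usage error and es=22. The two failing invocations, reproduced as they appear in the trace:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    $DD --ib= --ob= --oflag=0   # rejected: --oflags may be used only with --of
    $DD --ib= --ob= --iflag=0   # rejected: --iflags may be used only with --if

Note that these checks fire before SPDK app initialization: unlike the bdev-backed tests further down, neither run prints a "Starting SPDK" line.
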
23:53:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:09:33.832 23:53:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:33.832 23:53:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:09:33.832 23:53:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:33.832 23:53:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.832 23:53:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.832 23:53:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.832 23:53:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.832 23:53:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.832 23:53:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.832 23:53:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:33.833 23:53:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:33.833 23:53:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:33.833 [2024-11-18 23:53:40.408910] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:33.833 [2024-11-18 23:53:40.409083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64034 ] 00:09:34.092 [2024-11-18 23:53:40.587538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.092 [2024-11-18 23:53:40.673459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.351 [2024-11-18 23:53:40.842728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:34.351 [2024-11-18 23:53:40.923000] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:34.351 [2024-11-18 23:53:40.923284] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:34.351 [2024-11-18 23:53:40.923400] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:09:34.351 [2024-11-18 23:53:40.923515] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:34.351 [2024-11-18 23:53:40.923810] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:09:34.351 [2024-11-18 23:53:40.923942] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:34.351 [2024-11-18 23:53:40.924088] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:34.351 [2024-11-18 23:53:40.924273] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:34.920 [2024-11-18 23:53:41.486211] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:35.179 23:53:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:09:35.179 23:53:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.179 23:53:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:09:35.179 23:53:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:09:35.179 23:53:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:09:35.179 23:53:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.179 00:09:35.179 real 0m1.405s 00:09:35.179 user 0m1.096s 00:09:35.179 sys 0m0.202s 00:09:35.179 23:53:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.179 23:53:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:09:35.179 ************************************ 00:09:35.179 END TEST dd_unknown_flag 00:09:35.179 ************************************ 00:09:35.179 23:53:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:09:35.179 23:53:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.179 23:53:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.179 23:53:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:35.179 ************************************ 00:09:35.179 START TEST dd_invalid_json 00:09:35.179 ************************************ 00:09:35.179 23:53:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:09:35.180 23:53:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:35.180 23:53:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:09:35.180 23:53:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:09:35.180 23:53:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:35.180 23:53:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.180 23:53:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.180 23:53:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.180 23:53:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.180 23:53:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.180 23:53:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.180 23:53:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.180 23:53:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:35.180 23:53:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:35.439 [2024-11-18 23:53:41.871727] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
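
dd_invalid_json drives spdk_dd's --json path: the config is handed over as /dev/fd/62, the file-descriptor name bash gives a process substitution, and the bare `:` at negative_dd.sh@94 in the trace is consistent with that substitution producing no output at all. A plausible reconstruction of the invocation, with the process substitution itself being an assumption rather than something the trace shows verbatim:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    D=/home/vagrant/spdk_repo/spdk/test/dd
    $DD --if=$D/dd.dump0 --of=$D/dd.dump1 --json <(:)   # empty JSON config

json_config.c then fails the parse with "JSON data cannot be empty" and the run is torn down through spdk_app_stop, which is why this test maps its status through the es=234 -> 106 -> 1 chain rather than the immediate es=22 of the argument-validation tests.
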
00:09:35.439 [2024-11-18 23:53:41.871897] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64079 ] 00:09:35.439 [2024-11-18 23:53:42.050919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.698 [2024-11-18 23:53:42.133488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.698 [2024-11-18 23:53:42.133594] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:09:35.698 [2024-11-18 23:53:42.133665] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:35.698 [2024-11-18 23:53:42.133683] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:35.698 [2024-11-18 23:53:42.133747] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:35.698 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:09:35.698 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.698 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:09:35.698 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:09:35.698 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:09:35.698 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.698 00:09:35.698 real 0m0.600s 00:09:35.698 user 0m0.357s 00:09:35.698 sys 0m0.139s 00:09:35.698 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.698 ************************************ 00:09:35.698 END TEST dd_invalid_json 00:09:35.698 ************************************ 00:09:35.698 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:09:35.957 23:53:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:09:35.957 23:53:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.957 23:53:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.957 23:53:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:35.957 ************************************ 00:09:35.957 START TEST dd_invalid_seek 00:09:35.957 ************************************ 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:35.958 
23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:35.958 23:53:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:35.958 { 00:09:35.958 "subsystems": [ 00:09:35.958 { 00:09:35.958 "subsystem": "bdev", 00:09:35.958 "config": [ 00:09:35.958 { 00:09:35.958 "params": { 00:09:35.958 "block_size": 512, 00:09:35.958 "num_blocks": 512, 00:09:35.958 "name": "malloc0" 00:09:35.958 }, 00:09:35.958 "method": "bdev_malloc_create" 00:09:35.958 }, 00:09:35.958 { 00:09:35.958 "params": { 00:09:35.958 "block_size": 512, 00:09:35.958 "num_blocks": 512, 00:09:35.958 "name": "malloc1" 00:09:35.958 }, 00:09:35.958 "method": "bdev_malloc_create" 00:09:35.958 }, 00:09:35.958 { 00:09:35.958 "method": "bdev_wait_for_examine" 00:09:35.958 } 00:09:35.958 ] 00:09:35.958 } 00:09:35.958 ] 00:09:35.958 } 00:09:35.958 [2024-11-18 23:53:42.514332] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
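
From dd_invalid_seek onward the tests stop copying between the dump files and use in-memory bdevs instead: gen_conf emits the JSON interleaved with the timestamps above, describing two malloc bdevs of 512 blocks x 512 bytes each plus a bdev_wait_for_examine barrier. Reassembled without the log timestamps, the same config as a standalone file (saving it under a name of your choosing and passing it via --json is an illustration, not what the harness does):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc0" },
              "method": "bdev_malloc_create" },
            { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc1" },
              "method": "bdev_malloc_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }

With only 512 blocks in malloc1, the --seek=513 requested above has nowhere to land, and spdk_dd fails from dd_run() rather than from argument parsing.
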
00:09:35.958 [2024-11-18 23:53:42.514525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64105 ] 00:09:36.217 [2024-11-18 23:53:42.691934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.217 [2024-11-18 23:53:42.788647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.476 [2024-11-18 23:53:42.962523] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.476 [2024-11-18 23:53:43.074032] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:09:36.476 [2024-11-18 23:53:43.074123] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:37.045 [2024-11-18 23:53:43.660890] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:37.305 00:09:37.305 real 0m1.503s 00:09:37.305 user 0m1.246s 00:09:37.305 sys 0m0.210s 00:09:37.305 ************************************ 00:09:37.305 END TEST dd_invalid_seek 00:09:37.305 ************************************ 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:37.305 ************************************ 00:09:37.305 START TEST dd_invalid_skip 00:09:37.305 ************************************ 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:37.305 23:53:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:37.564 { 00:09:37.564 "subsystems": [ 00:09:37.564 { 00:09:37.564 "subsystem": "bdev", 00:09:37.564 "config": [ 00:09:37.564 { 00:09:37.564 "params": { 00:09:37.564 "block_size": 512, 00:09:37.564 "num_blocks": 512, 00:09:37.564 "name": "malloc0" 00:09:37.564 }, 00:09:37.564 "method": "bdev_malloc_create" 00:09:37.564 }, 00:09:37.564 { 00:09:37.564 "params": { 00:09:37.564 "block_size": 512, 00:09:37.564 "num_blocks": 512, 00:09:37.564 "name": "malloc1" 00:09:37.564 }, 00:09:37.564 "method": "bdev_malloc_create" 00:09:37.564 }, 00:09:37.564 { 00:09:37.564 "method": "bdev_wait_for_examine" 00:09:37.564 } 00:09:37.564 ] 00:09:37.564 } 00:09:37.564 ] 00:09:37.564 } 00:09:37.564 [2024-11-18 23:53:44.075958] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
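
dd_invalid_skip is the input-side mirror of the previous test: --seek offsets into the output bdev, --skip offsets into the input bdev, and both are bounded by the 512 blocks each malloc bdev exposes. The same naming split exists in coreutils dd, which is a reasonable mental model here even though spdk_dd implements its own checks:

    # coreutils semantics, for contrast (not spdk_dd):
    dd if=in.img of=out.img bs=512 skip=513   # skip: blocks to drop from the input
    dd if=in.img of=out.img bs=512 seek=513   # seek: blocks to pass over in the output

Against a 512-block device, --skip=513 leaves nothing to read, so dd_run() rejects it with the "--skip value too big" error traced below.
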
00:09:37.564 [2024-11-18 23:53:44.076166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64155 ] 00:09:37.824 [2024-11-18 23:53:44.256809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.824 [2024-11-18 23:53:44.339859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.824 [2024-11-18 23:53:44.488537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:38.084 [2024-11-18 23:53:44.598282] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:09:38.084 [2024-11-18 23:53:44.598354] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:38.652 [2024-11-18 23:53:45.220080] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:38.912 ************************************ 00:09:38.912 END TEST dd_invalid_skip 00:09:38.912 ************************************ 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:38.912 00:09:38.912 real 0m1.498s 00:09:38.912 user 0m1.236s 00:09:38.912 sys 0m0.214s 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:38.912 ************************************ 00:09:38.912 START TEST dd_invalid_input_count 00:09:38.912 ************************************ 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:38.912 23:53:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:38.912 { 00:09:38.912 "subsystems": [ 00:09:38.912 { 00:09:38.912 "subsystem": "bdev", 00:09:38.912 "config": [ 00:09:38.912 { 00:09:38.912 "params": { 00:09:38.912 "block_size": 512, 00:09:38.912 "num_blocks": 512, 00:09:38.912 "name": "malloc0" 00:09:38.912 }, 00:09:38.912 "method": "bdev_malloc_create" 00:09:38.912 }, 00:09:38.912 { 00:09:38.912 "params": { 00:09:38.912 "block_size": 512, 00:09:38.912 "num_blocks": 512, 00:09:38.912 "name": "malloc1" 00:09:38.912 }, 00:09:38.912 "method": "bdev_malloc_create" 00:09:38.912 }, 00:09:38.912 { 00:09:38.912 "method": "bdev_wait_for_examine" 00:09:38.912 } 00:09:38.912 ] 00:09:38.912 } 00:09:38.912 ] 00:09:38.912 } 00:09:39.174 [2024-11-18 23:53:45.623393] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
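
dd_invalid_input_count keeps both offsets at zero and oversubscribes --count instead: 513 blocks requested from a 512-block source. The bound spdk_dd enforces reduces to simple arithmetic over the input geometry, sketched here with assumed variable names:

    num_blocks=512 skip=0 count=513
    avail=$(( num_blocks - skip ))
    if (( count > avail )); then
        echo "--count value too big ($count) - only $avail blocks available from input"
    fi

which matches the message dd_run() emits in the trace that follows, again surfacing through the wrapper as es=228 -> 100 -> 1.
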
00:09:39.174 [2024-11-18 23:53:45.623784] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64196 ] 00:09:39.174 [2024-11-18 23:53:45.801264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.441 [2024-11-18 23:53:45.903642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.441 [2024-11-18 23:53:46.063874] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.700 [2024-11-18 23:53:46.186760] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:09:39.701 [2024-11-18 23:53:46.186824] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:40.270 [2024-11-18 23:53:46.798233] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:40.530 00:09:40.530 real 0m1.527s 00:09:40.530 user 0m1.279s 00:09:40.530 sys 0m0.197s 00:09:40.530 ************************************ 00:09:40.530 END TEST dd_invalid_input_count 00:09:40.530 ************************************ 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:40.530 ************************************ 00:09:40.530 START TEST dd_invalid_output_count 00:09:40.530 ************************************ 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:40.530 23:53:47 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:40.530 { 00:09:40.530 "subsystems": [ 00:09:40.530 { 00:09:40.530 "subsystem": "bdev", 00:09:40.530 "config": [ 00:09:40.530 { 00:09:40.530 "params": { 00:09:40.530 "block_size": 512, 00:09:40.530 "num_blocks": 512, 00:09:40.530 "name": "malloc0" 00:09:40.530 }, 00:09:40.530 "method": "bdev_malloc_create" 00:09:40.530 }, 00:09:40.530 { 00:09:40.530 "method": "bdev_wait_for_examine" 00:09:40.530 } 00:09:40.530 ] 00:09:40.530 } 00:09:40.530 ] 00:09:40.530 } 00:09:40.789 [2024-11-18 23:53:47.226937] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
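
dd_invalid_output_count flips the direction: a regular file (dd.dump0) feeds a single malloc bdev, so the gen_conf block above declares only malloc0, and the 513-block --count overruns the 512 blocks available in the output. Every one of these cases is dispatched through the same run_test helper; its shape below is inferred from the START/END banners and real/user/sys timing lines throughout this log, not copied from the actual implementation in common/autotest_common.sh:

    # Assumed shape of run_test, inferred from the banner format in this log.
    run_test() {
        local name=$1; shift
        printf '%s\nSTART TEST %s\n%s\n' '************************************' \
            "$name" '************************************'
        time "$@"
        local rc=$?
        printf '%s\nEND TEST %s\n%s\n' '************************************' \
            "$name" '************************************'
        return $rc
    }

The real helper also wires up the xtrace control visible as xtrace_disable in the trace, which this sketch leaves out.
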
00:09:40.789 [2024-11-18 23:53:47.227334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64242 ] 00:09:40.789 [2024-11-18 23:53:47.407536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.049 [2024-11-18 23:53:47.491004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.049 [2024-11-18 23:53:47.637684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:41.308 [2024-11-18 23:53:47.737466] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:09:41.308 [2024-11-18 23:53:47.737868] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:41.876 [2024-11-18 23:53:48.340488] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:41.876 23:53:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:09:41.876 23:53:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:09:42.136 ************************************ 00:09:42.136 END TEST dd_invalid_output_count 00:09:42.136 ************************************ 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:42.136 00:09:42.136 real 0m1.473s 00:09:42.136 user 0m1.223s 00:09:42.136 sys 0m0.221s 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:42.136 ************************************ 00:09:42.136 START TEST dd_bs_not_multiple 00:09:42.136 ************************************ 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:42.136 23:53:48 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:42.136 23:53:48 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:42.136 { 00:09:42.136 "subsystems": [ 00:09:42.136 { 00:09:42.136 "subsystem": "bdev", 00:09:42.136 "config": [ 00:09:42.136 { 00:09:42.136 "params": { 00:09:42.136 "block_size": 512, 00:09:42.136 "num_blocks": 512, 00:09:42.136 "name": "malloc0" 00:09:42.136 }, 00:09:42.136 "method": "bdev_malloc_create" 00:09:42.136 }, 00:09:42.136 { 00:09:42.136 "params": { 00:09:42.136 "block_size": 512, 00:09:42.136 "num_blocks": 512, 00:09:42.136 "name": "malloc1" 00:09:42.136 }, 00:09:42.136 "method": "bdev_malloc_create" 00:09:42.136 }, 00:09:42.136 { 00:09:42.136 "method": "bdev_wait_for_examine" 00:09:42.136 } 00:09:42.136 ] 00:09:42.136 } 00:09:42.136 ] 00:09:42.136 } 00:09:42.136 [2024-11-18 23:53:48.739406] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:42.136 [2024-11-18 23:53:48.739569] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64287 ] 00:09:42.396 [2024-11-18 23:53:48.920978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.396 [2024-11-18 23:53:49.007201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.655 [2024-11-18 23:53:49.155913] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:42.655 [2024-11-18 23:53:49.263235] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:09:42.655 [2024-11-18 23:53:49.263344] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:43.224 [2024-11-18 23:53:49.859442] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:43.484 23:53:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:09:43.484 23:53:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:43.484 23:53:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:09:43.484 ************************************ 00:09:43.484 END TEST dd_bs_not_multiple 00:09:43.484 ************************************ 00:09:43.484 23:53:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:09:43.484 23:53:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:09:43.484 23:53:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:43.484 00:09:43.484 real 0m1.468s 00:09:43.484 user 0m1.189s 00:09:43.484 sys 0m0.222s 00:09:43.484 23:53:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.484 23:53:50 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:43.484 ************************************ 00:09:43.484 END TEST spdk_dd_negative 00:09:43.484 ************************************ 00:09:43.484 00:09:43.484 real 0m14.030s 00:09:43.484 user 0m10.133s 00:09:43.484 sys 0m3.236s 00:09:43.484 23:53:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.484 23:53:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:43.744 ************************************ 00:09:43.744 END TEST spdk_dd 00:09:43.744 ************************************ 00:09:43.744 00:09:43.744 real 2m45.026s 00:09:43.744 user 2m12.034s 00:09:43.744 sys 1m1.373s 00:09:43.744 23:53:50 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.744 23:53:50 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:43.744 23:53:50 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:43.744 23:53:50 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:43.744 23:53:50 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:43.744 23:53:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:43.744 23:53:50 -- common/autotest_common.sh@10 -- # set +x 00:09:43.744 23:53:50 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:43.744 23:53:50 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:09:43.744 23:53:50 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:09:43.744 23:53:50 -- spdk/autotest.sh@277 
-- # export NET_TYPE 00:09:43.744 23:53:50 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:09:43.744 23:53:50 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:09:43.744 23:53:50 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:43.744 23:53:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.744 23:53:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.744 23:53:50 -- common/autotest_common.sh@10 -- # set +x 00:09:43.744 ************************************ 00:09:43.744 START TEST nvmf_tcp 00:09:43.744 ************************************ 00:09:43.744 23:53:50 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:43.744 * Looking for test storage... 00:09:43.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:43.744 23:53:50 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.744 23:53:50 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.744 23:53:50 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.744 23:53:50 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.744 23:53:50 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:44.005 23:53:50 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:44.005 23:53:50 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.005 23:53:50 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:44.005 23:53:50 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.005 23:53:50 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.005 23:53:50 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.005 23:53:50 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:44.005 23:53:50 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.005 23:53:50 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:44.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.005 --rc genhtml_branch_coverage=1 00:09:44.005 --rc genhtml_function_coverage=1 00:09:44.005 --rc genhtml_legend=1 00:09:44.005 --rc geninfo_all_blocks=1 00:09:44.005 --rc geninfo_unexecuted_blocks=1 00:09:44.005 00:09:44.005 ' 00:09:44.005 23:53:50 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:44.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.005 --rc genhtml_branch_coverage=1 00:09:44.005 --rc genhtml_function_coverage=1 00:09:44.005 --rc genhtml_legend=1 00:09:44.005 --rc geninfo_all_blocks=1 00:09:44.005 --rc geninfo_unexecuted_blocks=1 00:09:44.005 00:09:44.005 ' 00:09:44.005 23:53:50 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:44.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.005 --rc genhtml_branch_coverage=1 00:09:44.005 --rc genhtml_function_coverage=1 00:09:44.005 --rc genhtml_legend=1 00:09:44.005 --rc geninfo_all_blocks=1 00:09:44.005 --rc geninfo_unexecuted_blocks=1 00:09:44.005 00:09:44.005 ' 00:09:44.005 23:53:50 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:44.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.005 --rc genhtml_branch_coverage=1 00:09:44.005 --rc genhtml_function_coverage=1 00:09:44.005 --rc genhtml_legend=1 00:09:44.005 --rc geninfo_all_blocks=1 00:09:44.005 --rc geninfo_unexecuted_blocks=1 00:09:44.005 00:09:44.005 ' 00:09:44.005 23:53:50 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:44.005 23:53:50 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:44.005 23:53:50 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:44.006 23:53:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.006 23:53:50 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.006 23:53:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:44.006 ************************************ 00:09:44.006 START TEST nvmf_target_core 00:09:44.006 ************************************ 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:44.006 * Looking for test storage... 00:09:44.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:44.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.006 --rc genhtml_branch_coverage=1 00:09:44.006 --rc genhtml_function_coverage=1 00:09:44.006 --rc genhtml_legend=1 00:09:44.006 --rc geninfo_all_blocks=1 00:09:44.006 --rc geninfo_unexecuted_blocks=1 00:09:44.006 00:09:44.006 ' 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:44.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.006 --rc genhtml_branch_coverage=1 00:09:44.006 --rc genhtml_function_coverage=1 00:09:44.006 --rc genhtml_legend=1 00:09:44.006 --rc geninfo_all_blocks=1 00:09:44.006 --rc geninfo_unexecuted_blocks=1 00:09:44.006 00:09:44.006 ' 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:44.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.006 --rc genhtml_branch_coverage=1 00:09:44.006 --rc genhtml_function_coverage=1 00:09:44.006 --rc genhtml_legend=1 00:09:44.006 --rc geninfo_all_blocks=1 00:09:44.006 --rc geninfo_unexecuted_blocks=1 00:09:44.006 00:09:44.006 ' 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:44.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.006 --rc genhtml_branch_coverage=1 00:09:44.006 --rc genhtml_function_coverage=1 00:09:44.006 --rc genhtml_legend=1 00:09:44.006 --rc geninfo_all_blocks=1 00:09:44.006 --rc geninfo_unexecuted_blocks=1 00:09:44.006 00:09:44.006 ' 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.006 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:44.006 23:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.007 23:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.007 23:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:44.007 ************************************ 00:09:44.007 START TEST nvmf_host_management 00:09:44.007 ************************************ 00:09:44.007 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:44.267 * Looking for test storage... 
00:09:44.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:44.267 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:44.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.268 --rc genhtml_branch_coverage=1 00:09:44.268 --rc genhtml_function_coverage=1 00:09:44.268 --rc genhtml_legend=1 00:09:44.268 --rc geninfo_all_blocks=1 00:09:44.268 --rc geninfo_unexecuted_blocks=1 00:09:44.268 00:09:44.268 ' 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:44.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.268 --rc genhtml_branch_coverage=1 00:09:44.268 --rc genhtml_function_coverage=1 00:09:44.268 --rc genhtml_legend=1 00:09:44.268 --rc geninfo_all_blocks=1 00:09:44.268 --rc geninfo_unexecuted_blocks=1 00:09:44.268 00:09:44.268 ' 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:44.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.268 --rc genhtml_branch_coverage=1 00:09:44.268 --rc genhtml_function_coverage=1 00:09:44.268 --rc genhtml_legend=1 00:09:44.268 --rc geninfo_all_blocks=1 00:09:44.268 --rc geninfo_unexecuted_blocks=1 00:09:44.268 00:09:44.268 ' 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:44.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.268 --rc genhtml_branch_coverage=1 00:09:44.268 --rc genhtml_function_coverage=1 00:09:44.268 --rc genhtml_legend=1 00:09:44.268 --rc geninfo_all_blocks=1 00:09:44.268 --rc geninfo_unexecuted_blocks=1 00:09:44.268 00:09:44.268 ' 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.268 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:44.268 23:53:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:44.268 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:44.269 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:44.269 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:44.269 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:44.269 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:44.269 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:44.269 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:44.269 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:44.269 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:44.269 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:44.269 Cannot find device "nvmf_init_br" 00:09:44.269 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:44.269 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:44.269 Cannot find device "nvmf_init_br2" 00:09:44.269 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:44.269 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:44.269 Cannot find device "nvmf_tgt_br" 00:09:44.269 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:09:44.269 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:44.269 Cannot find device "nvmf_tgt_br2" 00:09:44.269 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:09:44.269 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:44.269 Cannot find device "nvmf_init_br" 00:09:44.269 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:09:44.269 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:44.528 Cannot find device "nvmf_init_br2" 00:09:44.528 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:09:44.528 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:44.528 Cannot find device "nvmf_tgt_br" 00:09:44.528 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:09:44.528 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:44.528 Cannot find device "nvmf_tgt_br2" 00:09:44.528 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:09:44.528 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:44.528 Cannot find device "nvmf_br" 00:09:44.528 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:44.528 Cannot find device "nvmf_init_if" 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:44.528 Cannot find device "nvmf_init_if2" 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:44.528 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:44.528 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:44.528 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:44.529 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:44.529 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:44.529 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:44.529 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:44.529 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:44.529 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:44.788 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:44.788 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:09:44.788 00:09:44.788 --- 10.0.0.3 ping statistics --- 00:09:44.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.788 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:44.788 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:44.788 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:09:44.788 00:09:44.788 --- 10.0.0.4 ping statistics --- 00:09:44.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.788 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:44.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:44.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:09:44.788 00:09:44.788 --- 10.0.0.1 ping statistics --- 00:09:44.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.788 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:44.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:44.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:09:44.788 00:09:44.788 --- 10.0.0.2 ping statistics --- 00:09:44.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.788 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:44.788 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:44.789 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:44.789 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:44.789 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:44.789 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:44.789 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:44.789 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:44.789 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=64634 00:09:44.789 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:44.789 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 64634 00:09:44.789 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 64634 ']' 00:09:44.789 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.789 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.789 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.789 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.789 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.053 [2024-11-18 23:53:51.520099] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:45.053 [2024-11-18 23:53:51.520529] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.053 [2024-11-18 23:53:51.712543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:45.312 [2024-11-18 23:53:51.843437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.312 [2024-11-18 23:53:51.843792] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.312 [2024-11-18 23:53:51.843844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.312 [2024-11-18 23:53:51.843861] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.312 [2024-11-18 23:53:51.843878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.312 [2024-11-18 23:53:51.846040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.312 [2024-11-18 23:53:51.846175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.312 [2024-11-18 23:53:51.846348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:45.312 [2024-11-18 23:53:51.846383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.572 [2024-11-18 23:53:52.072166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:45.831 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.831 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:45.831 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:45.831 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:45.831 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.831 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.831 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:45.831 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.831 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.831 [2024-11-18 23:53:52.436802] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.831 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.831 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:45.831 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:45.831 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.831 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
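The (( i == 0 )) check above is the tail of waitforlisten's retry loop: startup only counts as successful if the RPC socket answered before the retries ran out. A minimal sketch of that readiness poll, assuming rpc.py from the checked-out repo and the socket path printed earlier (the real helper in common/autotest_common.sh carries extra bookkeeping):

    # Poll the SPDK RPC socket until the app reports initialization done (sketch).
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i > 0; i--)); do
            if scripts/rpc.py -s "$rpc_addr" -t 1 framework_wait_init &> /dev/null; then
                return 0  # target is up and serving RPCs
            fi
            kill -0 "$pid" 2> /dev/null || return 1  # target died before listening
            sleep 0.1
        done
        return 1  # retries exhausted
    }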
00:09:45.831 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:45.831 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:45.831 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.831 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:46.091 Malloc0 00:09:46.091 [2024-11-18 23:53:52.553640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:46.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=64688 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64688 /var/tmp/bdevperf.sock 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 64688 ']' 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
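By this point the cat at host_management.sh@23 has replayed the script's RPC batch against the target (the TCP transport itself was created a step earlier with nvmf_create_transport -t tcp -o -u 8192), which is why Malloc0 and the listener notice for 10.0.0.3:4420 appear above. The exact batch lives in host_management.sh; a representative sequence using only the names and address visible in this log (the bdev size/block-size arguments and the SPDK0 serial are assumptions) would be:

    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The resulting state can be sanity-checked with: rpc.py nvmf_get_subsystems | jq '.[] | {nqn, listen_addresses, hosts}'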
00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:46.091 { 00:09:46.091 "params": { 00:09:46.091 "name": "Nvme$subsystem", 00:09:46.091 "trtype": "$TEST_TRANSPORT", 00:09:46.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:46.091 "adrfam": "ipv4", 00:09:46.091 "trsvcid": "$NVMF_PORT", 00:09:46.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:46.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:46.091 "hdgst": ${hdgst:-false}, 00:09:46.091 "ddgst": ${ddgst:-false} 00:09:46.091 }, 00:09:46.091 "method": "bdev_nvme_attach_controller" 00:09:46.091 } 00:09:46.091 EOF 00:09:46.091 )") 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:46.091 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:46.091 "params": { 00:09:46.091 "name": "Nvme0", 00:09:46.091 "trtype": "tcp", 00:09:46.091 "traddr": "10.0.0.3", 00:09:46.091 "adrfam": "ipv4", 00:09:46.091 "trsvcid": "4420", 00:09:46.091 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:46.091 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:46.091 "hdgst": false, 00:09:46.091 "ddgst": false 00:09:46.091 }, 00:09:46.091 "method": "bdev_nvme_attach_controller" 00:09:46.091 }' 00:09:46.091 [2024-11-18 23:53:52.737311] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:46.091 [2024-11-18 23:53:52.737569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64688 ] 00:09:46.350 [2024-11-18 23:53:52.946432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.609 [2024-11-18 23:53:53.052127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.609 [2024-11-18 23:53:53.230653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:46.869 Running I/O for 10 seconds... 
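Unpacking the bdevperf launch above: gen_nvmf_target_json expands its heredoc template into the Nvme0 attach parameters shown by printf, and bdevperf reads that config via --json /dev/fd/63, i.e. through a process substitution rather than a file on disk. The remaining flags request queue depth 64 (-q), 64 KiB I/Os (-o 65536), a verify workload (-w) and a 10-second run (-t). The waitforio loop that follows polls the bdevperf RPC socket for completed reads; its core check is equivalent to:

    # Pull the per-bdev read counter from bdevperf and compare to the threshold.
    read_io_count=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops')
    (( read_io_count >= 100 ))  # 387 reads had completed by this point in the run

Once the threshold is met, the test removes host0 from cnode0 (host_management.sh@84); that host removal is what produces the long run of ABORTED - SQ DELETION completions below, as every command in flight on the deleted submission queue is failed back to bdevperf.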
00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.129 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:47.129 [2024-11-18 
23:53:53.767088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.767322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.767492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.767650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.767805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.767952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.768114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.768284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.768461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.768583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.768628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.768645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.768661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.768675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.768691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.768704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.768719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.768733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.768748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.768762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 
23:53:53.768777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.768790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.768806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.768819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.768834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.768848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.768864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.768877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.768893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.768906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.768921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.768934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.768950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.768963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.768981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.768994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 
23:53:53.769067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769369] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.769978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.769991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.129 [2024-11-18 23:53:53.770006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.129 [2024-11-18 23:53:53.770019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.130 [2024-11-18 23:53:53.770035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.130 [2024-11-18 23:53:53.770048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.130 [2024-11-18 23:53:53.770064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.130 [2024-11-18 23:53:53.770077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.130 [2024-11-18 23:53:53.770092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.130 [2024-11-18 23:53:53.770105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.130 [2024-11-18 23:53:53.770120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.130 [2024-11-18 23:53:53.770135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.130 [2024-11-18 23:53:53.770150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.130 [2024-11-18 23:53:53.770163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.130 [2024-11-18 23:53:53.770178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.130 [2024-11-18 23:53:53.770192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.130 [2024-11-18 23:53:53.770219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.130 [2024-11-18 23:53:53.770234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.130 [2024-11-18 23:53:53.770249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.130 [2024-11-18 23:53:53.770262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.130 [2024-11-18 23:53:53.770277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.130 [2024-11-18 23:53:53.770291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.130 [2024-11-18 23:53:53.770306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.130 [2024-11-18 23:53:53.770319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.130 [2024-11-18 23:53:53.770334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.130 [2024-11-18 23:53:53.770347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.130 [2024-11-18 23:53:53.770363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.130 [2024-11-18 23:53:53.770376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.130 [2024-11-18 23:53:53.770391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:47.130 [2024-11-18 23:53:53.770404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.130 [2024-11-18 23:53:53.770420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set 00:09:47.130 [2024-11-18 23:53:53.771146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:47.130 [2024-11-18 23:53:53.771304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.130 [2024-11-18 23:53:53.771476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:47.130 [2024-11-18 23:53:53.771659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.130 [2024-11-18 23:53:53.771832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:47.130 [2024-11-18 23:53:53.771995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.130 [2024-11-18 23:53:53.772168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:47.130 [2024-11-18 23:53:53.772386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:47.130 [2024-11-18 23:53:53.772526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:09:47.130 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.130 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 [2024-11-18 23:53:53.774167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 23:53:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.130 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:47.130 task offset: 61952 on job bdev=Nvme0n1 fails 00:09:47.130 00:09:47.130 Latency(us) 00:09:47.130 [2024-11-18T23:53:53.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.130 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:47.130 Job: Nvme0n1 ended in about 0.36 seconds with error 00:09:47.130 Verification LBA range: start 0x0 length 0x400 00:09:47.130 Nvme0n1 : 0.36 1248.42 78.03 178.35 0.00 43155.87 6404.65 42657.98 00:09:47.130 [2024-11-18T23:53:53.822Z] =================================================================================================================== 00:09:47.130 [2024-11-18T23:53:53.822Z] Total : 1248.42 78.03 178.35 0.00 43155.87 6404.65 42657.98 00:09:47.130 [2024-11-18 23:53:53.779748] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:47.130 [2024-11-18 23:53:53.779824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:09:47.130 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.130 23:53:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:47.130 [2024-11-18 23:53:53.787513] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:09:48.612 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64688 00:09:48.612 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64688) - No such process 00:09:48.612 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:48.612 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:48.612 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:48.612 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:48.612 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:48.612 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:48.612 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:48.612 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:48.612 { 00:09:48.612 "params": { 00:09:48.612 "name": "Nvme$subsystem", 00:09:48.612 "trtype": "$TEST_TRANSPORT", 00:09:48.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:48.612 "adrfam": "ipv4", 00:09:48.612 "trsvcid": "$NVMF_PORT", 00:09:48.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:48.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:48.612 "hdgst": ${hdgst:-false}, 00:09:48.612 "ddgst": ${ddgst:-false} 00:09:48.612 }, 00:09:48.612 "method": "bdev_nvme_attach_controller" 
00:09:48.612 } 00:09:48.612 EOF 00:09:48.612 )") 00:09:48.612 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:48.612 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:48.612 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:48.612 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:48.612 "params": { 00:09:48.612 "name": "Nvme0", 00:09:48.612 "trtype": "tcp", 00:09:48.612 "traddr": "10.0.0.3", 00:09:48.612 "adrfam": "ipv4", 00:09:48.612 "trsvcid": "4420", 00:09:48.612 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:48.612 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:48.612 "hdgst": false, 00:09:48.612 "ddgst": false 00:09:48.612 }, 00:09:48.612 "method": "bdev_nvme_attach_controller" 00:09:48.612 }' 00:09:48.612 [2024-11-18 23:53:54.885921] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:48.612 [2024-11-18 23:53:54.886119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64727 ] 00:09:48.612 [2024-11-18 23:53:55.057241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.612 [2024-11-18 23:53:55.154371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.910 [2024-11-18 23:53:55.333657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:48.910 Running I/O for 1 seconds... 00:09:50.107 1472.00 IOPS, 92.00 MiB/s 00:09:50.107 Latency(us) 00:09:50.107 [2024-11-18T23:53:56.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:50.107 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:50.107 Verification LBA range: start 0x0 length 0x400 00:09:50.107 Nvme0n1 : 1.04 1474.04 92.13 0.00 0.00 42642.72 5749.29 37891.72 00:09:50.107 [2024-11-18T23:53:56.799Z] =================================================================================================================== 00:09:50.107 [2024-11-18T23:53:56.799Z] Total : 1474.04 92.13 0.00 0.00 42642.72 5749.29 37891.72 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:51.044 23:53:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:51.044 rmmod nvme_tcp 00:09:51.044 rmmod nvme_fabrics 00:09:51.044 rmmod nvme_keyring 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 64634 ']' 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 64634 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 64634 ']' 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 64634 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64634 00:09:51.044 killing process with pid 64634 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64634' 00:09:51.044 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 64634 00:09:51.045 23:53:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 64634 00:09:51.982 [2024-11-18 23:53:58.646311] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:52.241 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:52.500 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:52.500 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:52.500 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.500 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.500 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.500 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:09:52.500 23:53:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:52.500 00:09:52.500 real 0m8.320s 00:09:52.500 user 0m30.985s 00:09:52.500 sys 0m1.709s 00:09:52.500 ************************************ 00:09:52.500 END TEST nvmf_host_management 00:09:52.500 ************************************ 00:09:52.500 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.500 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:52.500 23:53:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:52.500 23:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:52.500 23:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.500 23:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:52.500 ************************************ 00:09:52.500 START TEST nvmf_lvol 00:09:52.500 ************************************ 00:09:52.500 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:52.500 * Looking for test 
storage... 00:09:52.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:52.501 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:52.501 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:09:52.501 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:52.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.761 --rc genhtml_branch_coverage=1 00:09:52.761 --rc genhtml_function_coverage=1 00:09:52.761 --rc genhtml_legend=1 00:09:52.761 --rc geninfo_all_blocks=1 00:09:52.761 --rc geninfo_unexecuted_blocks=1 00:09:52.761 00:09:52.761 ' 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:52.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.761 --rc genhtml_branch_coverage=1 00:09:52.761 --rc genhtml_function_coverage=1 00:09:52.761 --rc genhtml_legend=1 00:09:52.761 --rc geninfo_all_blocks=1 00:09:52.761 --rc geninfo_unexecuted_blocks=1 00:09:52.761 00:09:52.761 ' 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:52.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.761 --rc genhtml_branch_coverage=1 00:09:52.761 --rc genhtml_function_coverage=1 00:09:52.761 --rc genhtml_legend=1 00:09:52.761 --rc geninfo_all_blocks=1 00:09:52.761 --rc geninfo_unexecuted_blocks=1 00:09:52.761 00:09:52.761 ' 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:52.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.761 --rc genhtml_branch_coverage=1 00:09:52.761 --rc genhtml_function_coverage=1 00:09:52.761 --rc genhtml_legend=1 00:09:52.761 --rc geninfo_all_blocks=1 00:09:52.761 --rc geninfo_unexecuted_blocks=1 00:09:52.761 00:09:52.761 ' 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.761 23:53:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.761 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:52.762 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:52.762 
23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
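[Note] nvmf_veth_init names its interfaces above and, in the trace that follows, first tears down any leftovers from a previous run before creating new ones. Each cleanup command is allowed to fail ("Cannot find device" on a clean host); the xtrace shows a true executed on the same common.sh line after every failure, i.e. a cmd || true pattern. A condensed sketch of that teardown-first idiom (interface names are from the trace; the loop is a simplification of the one-command-per-line original):

    # Idempotent teardown: each command may fail if the device
    # does not exist yet, so failures are swallowed with || true.
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" nomaster || true   # detach from nvmf_br
        ip link set "$br" down || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true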
00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:52.762 Cannot find device "nvmf_init_br" 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:52.762 Cannot find device "nvmf_init_br2" 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:52.762 Cannot find device "nvmf_tgt_br" 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:52.762 Cannot find device "nvmf_tgt_br2" 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:52.762 Cannot find device "nvmf_init_br" 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:52.762 Cannot find device "nvmf_init_br2" 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:52.762 Cannot find device "nvmf_tgt_br" 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:52.762 Cannot find device "nvmf_tgt_br2" 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:52.762 Cannot find device "nvmf_br" 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:52.762 Cannot find device "nvmf_init_if" 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:52.762 Cannot find device "nvmf_init_if2" 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:52.762 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:52.762 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:52.762 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:53.021 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:53.021 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:09:53.021 00:09:53.021 --- 10.0.0.3 ping statistics --- 00:09:53.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.021 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:53.021 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:53.021 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:09:53.021 00:09:53.021 --- 10.0.0.4 ping statistics --- 00:09:53.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.021 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:53.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:53.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:09:53.021 00:09:53.021 --- 10.0.0.1 ping statistics --- 00:09:53.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.021 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:53.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:53.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:09:53.021 00:09:53.021 --- 10.0.0.2 ping statistics --- 00:09:53.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.021 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:53.021 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=65029 00:09:53.022 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:53.022 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 65029 00:09:53.022 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 65029 ']' 00:09:53.022 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.022 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.022 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.022 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.022 23:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:53.281 [2024-11-18 23:53:59.814942] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
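[Note] The startup notices around this point come from nvmfappstart: NVMF_APP was prefixed with "${NVMF_TARGET_NS_CMD[@]}" just above, so nvmf_tgt runs inside nvmf_tgt_ns_spdk while waitforlisten polls its RPC socket from the host. A minimal sketch of that launch-and-wait step (the polling loop is an assumed simplification; the real waitforlisten in autotest_common.sh has richer error handling, but max_retries=100 and /var/tmp/spdk.sock match the trace):

    # Launch the target in its network namespace, as traced above:
    # shm id 0 (-i), all tracepoint groups (-e 0xFFFF), cores 0-2 (-m 0x7).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!

    # Wait until the app answers on its UNIX-domain RPC socket.
    for ((retry = 0; retry < 100; retry++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &> /dev/null && break
        sleep 0.1
    done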
00:09:53.281 [2024-11-18 23:53:59.815643] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.540 [2024-11-18 23:54:00.005953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:53.540 [2024-11-18 23:54:00.135464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.540 [2024-11-18 23:54:00.135538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.540 [2024-11-18 23:54:00.135561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.540 [2024-11-18 23:54:00.135577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.540 [2024-11-18 23:54:00.135611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.540 [2024-11-18 23:54:00.137729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.540 [2024-11-18 23:54:00.137785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.540 [2024-11-18 23:54:00.137779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.799 [2024-11-18 23:54:00.357758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:54.059 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.059 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:54.059 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:54.059 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.059 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:54.318 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.318 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:54.318 [2024-11-18 23:54:01.004283] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.576 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.835 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:54.835 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.094 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:55.094 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:55.353 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:55.921 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e0d6614b-f290-4ce2-9c5d-b44a31aaaed1 00:09:55.921 23:54:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e0d6614b-f290-4ce2-9c5d-b44a31aaaed1 lvol 20 00:09:56.180 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ec7f7cef-0dde-4268-8140-911c3a3ac3bd 00:09:56.180 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:56.439 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ec7f7cef-0dde-4268-8140-911c3a3ac3bd 00:09:56.698 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:56.957 [2024-11-18 23:54:03.438900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:56.957 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:57.217 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65110 00:09:57.217 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:57.217 23:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:58.155 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot ec7f7cef-0dde-4268-8140-911c3a3ac3bd MY_SNAPSHOT 00:09:58.414 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=eb1ef157-fbd4-42d9-bea7-5becade41466 00:09:58.414 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize ec7f7cef-0dde-4268-8140-911c3a3ac3bd 30 00:09:58.983 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone eb1ef157-fbd4-42d9-bea7-5becade41466 MY_CLONE 00:09:59.242 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=12b439b9-1cc6-4bc1-adc6-ed9119c119ff 00:09:59.242 23:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 12b439b9-1cc6-4bc1-adc6-ed9119c119ff 00:09:59.810 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65110 00:10:07.930 Initializing NVMe Controllers 00:10:07.930 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:10:07.930 Controller IO queue size 128, less than required. 00:10:07.930 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:07.930 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:07.930 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:07.930 Initialization complete. Launching workers. 
00:10:07.930 ========================================================
00:10:07.930                                                                              Latency(us)
00:10:07.930 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:10:07.930 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:    9215.80      36.00   13898.17     306.30  193689.48
00:10:07.930 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:    9058.40      35.38   14131.98    3804.32  204341.41
00:10:07.930 ========================================================
00:10:07.930 Total                                                                  :   18274.20      71.38   14014.07     306.30  204341.41
00:10:07.930
00:10:07.930 23:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:08.189 23:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ec7f7cef-0dde-4268-8140-911c3a3ac3bd 00:10:08.449 23:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e0d6614b-f290-4ce2-9c5d-b44a31aaaed1 00:10:08.449 23:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:08.449 23:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:08.449 23:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:08.449 23:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:08.449 23:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:08.449 23:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.449 23:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:08.449 23:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.449 23:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.449 rmmod nvme_tcp 00:10:08.449 rmmod nvme_fabrics 00:10:08.449 rmmod nvme_keyring 00:10:08.449 23:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.449 23:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:08.449 23:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:08.449 23:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 65029 ']' 00:10:08.449 23:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 65029 00:10:08.449 23:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 65029 ']' 00:10:08.449 23:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 65029 00:10:08.449 23:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:10:08.449 23:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.449 23:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65029 00:10:08.449 23:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:08.449 killing process with pid 65029 23:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:08.449 23:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvol --
common/autotest_common.sh@972 -- # echo 'killing process with pid 65029' 00:10:08.449 23:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 65029 00:10:08.449 23:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 65029 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:10:09.827 00:10:09.827 real 0m17.446s 00:10:09.827 user 1m9.449s 00:10:09.827 sys 0m4.211s 00:10:09.827 ************************************ 00:10:09.827 END TEST nvmf_lvol 00:10:09.827 
************************************ 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.827 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:10.087 23:54:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:10.087 23:54:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:10.087 23:54:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.087 23:54:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.087 ************************************ 00:10:10.087 START TEST nvmf_lvs_grow 00:10:10.087 ************************************ 00:10:10.087 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:10.087 * Looking for test storage... 00:10:10.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:10.087 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:10.087 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:10:10.087 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:10.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.088 --rc genhtml_branch_coverage=1 00:10:10.088 --rc genhtml_function_coverage=1 00:10:10.088 --rc genhtml_legend=1 00:10:10.088 --rc geninfo_all_blocks=1 00:10:10.088 --rc geninfo_unexecuted_blocks=1 00:10:10.088 00:10:10.088 ' 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:10.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.088 --rc genhtml_branch_coverage=1 00:10:10.088 --rc genhtml_function_coverage=1 00:10:10.088 --rc genhtml_legend=1 00:10:10.088 --rc geninfo_all_blocks=1 00:10:10.088 --rc geninfo_unexecuted_blocks=1 00:10:10.088 00:10:10.088 ' 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:10.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.088 --rc genhtml_branch_coverage=1 00:10:10.088 --rc genhtml_function_coverage=1 00:10:10.088 --rc genhtml_legend=1 00:10:10.088 --rc geninfo_all_blocks=1 00:10:10.088 --rc geninfo_unexecuted_blocks=1 00:10:10.088 00:10:10.088 ' 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:10.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.088 --rc genhtml_branch_coverage=1 00:10:10.088 --rc genhtml_function_coverage=1 00:10:10.088 --rc genhtml_legend=1 00:10:10.088 --rc geninfo_all_blocks=1 00:10:10.088 --rc geninfo_unexecuted_blocks=1 00:10:10.088 00:10:10.088 ' 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:10.088 23:54:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.088 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.089 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
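[Note] nvmf_lvs_grow keeps rpc_py on the default socket and sets aside /var/tmp/bdevperf.sock, presumably for the separate bdevperf instance this test drives later. For contrast, the storage stack the preceding nvmf_lvol test built over the default socket, condensed from its trace above (the command-substitution captures are implied by the lvs=/lvol=/snapshot=/clone= assignments in that trace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Two 64 MB, 512 B-block malloc bdevs, striped into a raid0,
    # with a logical volume store and a 20 MB volume on top.
    $RPC bdev_malloc_create 64 512                                  # Malloc0
    $RPC bdev_malloc_create 64 512                                  # Malloc1
    $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)

    # Export the volume over NVMe/TCP, then mutate it under live perf I/O.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # taken while spdk_nvme_perf runs
    $RPC bdev_lvol_resize "$lvol" 30
    clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)
    $RPC bdev_lvol_inflate "$clone"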
00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
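[Note] Besides interfaces, nvmftestinit manages firewall state: the ipts lines later in this trace expand (nvmf/common.sh@790) into iptables rules tagged with an SPDK_NVMF comment, and the iptr seen during the lvol teardown (nvmf/common.sh@791) strips exactly those tagged rules. A sketch condensed from the traced expansions:

    # Tag every rule so cleanup can find it again later.
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

    # Drop only the tagged rules, leaving the rest of the ruleset alone.
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

    ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    iptr   # at nvmftestfini time: removes only the SPDK_NVMF-tagged rules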
00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:10.089 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:10.348 Cannot find device "nvmf_init_br" 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:10.348 Cannot find device "nvmf_init_br2" 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:10.348 Cannot find device "nvmf_tgt_br" 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:10.348 Cannot find device "nvmf_tgt_br2" 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:10.348 Cannot find device "nvmf_init_br" 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:10.348 Cannot find device "nvmf_init_br2" 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:10.348 Cannot find device "nvmf_tgt_br" 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:10.348 Cannot find device "nvmf_tgt_br2" 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:10.348 Cannot find device "nvmf_br" 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:10.348 Cannot find device "nvmf_init_if" 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:10.348 Cannot find device "nvmf_init_if2" 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:10.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:10.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:10.348 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:10.348 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:10.348 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:10.348 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:10.348 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:10.348 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:10.348 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
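[Annotation] At this point the test network is fully built: four veth pairs, the tgt_if ends moved into the nvmf_tgt_ns_spdk namespace and addressed 10.0.0.3-4/24, the init_if ends addressed 10.0.0.1-2/24 in the root namespace, and all four host-side *_br peers enslaved to the nvmf_br bridge so the two sides share one broadcast domain. Condensed from the trace above (same commands, grouped slightly for readability):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring everything up, including loopback inside the namespace
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
             nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

The iptables inserts that come next open TCP port 4420 on the initiator-facing interfaces and allow forwarding across the bridge (each rule is tagged with an SPDK_NVMF comment so teardown can find it later), and the four pings verify reachability in both directions before any NVMe/TCP traffic is attempted.
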
00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:10.607 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:10.607 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:10:10.607 00:10:10.607 --- 10.0.0.3 ping statistics --- 00:10:10.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.607 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:10.607 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:10.607 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:10:10.607 00:10:10.607 --- 10.0.0.4 ping statistics --- 00:10:10.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.607 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:10.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:10.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:10:10.607 00:10:10.607 --- 10.0.0.1 ping statistics --- 00:10:10.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.607 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:10.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:10.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:10:10.607 00:10:10.607 --- 10.0.0.2 ping statistics --- 00:10:10.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.607 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:10.607 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=65507 00:10:10.608 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 65507 00:10:10.608 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 65507 ']' 00:10:10.608 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.608 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.608 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.608 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.608 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:10.608 [2024-11-18 23:54:17.284649] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:10:10.608 [2024-11-18 23:54:17.284824] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.869 [2024-11-18 23:54:17.467931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.146 [2024-11-18 23:54:17.561306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.146 [2024-11-18 23:54:17.561612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.146 [2024-11-18 23:54:17.561726] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.146 [2024-11-18 23:54:17.561853] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.146 [2024-11-18 23:54:17.561973] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.146 [2024-11-18 23:54:17.563242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.146 [2024-11-18 23:54:17.736967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:11.723 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.723 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:10:11.723 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:11.723 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:11.723 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:11.723 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.723 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:11.982 [2024-11-18 23:54:18.579425] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.982 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:11.982 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:11.982 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.982 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:11.982 ************************************ 00:10:11.982 START TEST lvs_grow_clean 00:10:11.982 ************************************ 00:10:11.982 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:10:11.982 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:11.982 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:11.982 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:11.982 23:54:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:11.982 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:11.982 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:11.982 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:11.982 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:11.982 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:12.550 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:12.550 23:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:12.550 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=05503aad-8825-43f7-80ec-59d8ff3deb94 00:10:12.550 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:12.550 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05503aad-8825-43f7-80ec-59d8ff3deb94 00:10:13.116 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:13.116 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:13.116 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 05503aad-8825-43f7-80ec-59d8ff3deb94 lvol 150 00:10:13.116 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=30029927-642a-4ec6-8827-9aa954c4a939 00:10:13.116 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:13.116 23:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:13.375 [2024-11-18 23:54:20.006974] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:13.375 [2024-11-18 23:54:20.007098] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:13.375 true 00:10:13.375 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05503aad-8825-43f7-80ec-59d8ff3deb94 00:10:13.375 23:54:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:13.635 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:13.635 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:14.203 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 30029927-642a-4ec6-8827-9aa954c4a939 00:10:14.203 23:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:14.462 [2024-11-18 23:54:21.108143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:14.462 23:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:14.721 23:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65595 00:10:14.721 23:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:14.721 23:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:14.721 23:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65595 /var/tmp/bdevperf.sock 00:10:14.721 23:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 65595 ']' 00:10:14.721 23:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:14.721 23:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:14.721 23:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:14.721 23:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.721 23:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:14.981 [2024-11-18 23:54:21.441529] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:10:14.981 [2024-11-18 23:54:21.441684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65595 ] 00:10:14.981 [2024-11-18 23:54:21.609113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.240 [2024-11-18 23:54:21.704420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.240 [2024-11-18 23:54:21.855560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:15.809 23:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.809 23:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:10:15.809 23:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:16.069 Nvme0n1 00:10:16.069 23:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:16.328 [ 00:10:16.328 { 00:10:16.328 "name": "Nvme0n1", 00:10:16.328 "aliases": [ 00:10:16.328 "30029927-642a-4ec6-8827-9aa954c4a939" 00:10:16.328 ], 00:10:16.328 "product_name": "NVMe disk", 00:10:16.328 "block_size": 4096, 00:10:16.328 "num_blocks": 38912, 00:10:16.328 "uuid": "30029927-642a-4ec6-8827-9aa954c4a939", 00:10:16.328 "numa_id": -1, 00:10:16.328 "assigned_rate_limits": { 00:10:16.328 "rw_ios_per_sec": 0, 00:10:16.328 "rw_mbytes_per_sec": 0, 00:10:16.328 "r_mbytes_per_sec": 0, 00:10:16.328 "w_mbytes_per_sec": 0 00:10:16.328 }, 00:10:16.328 "claimed": false, 00:10:16.328 "zoned": false, 00:10:16.328 "supported_io_types": { 00:10:16.328 "read": true, 00:10:16.328 "write": true, 00:10:16.328 "unmap": true, 00:10:16.328 "flush": true, 00:10:16.328 "reset": true, 00:10:16.328 "nvme_admin": true, 00:10:16.328 "nvme_io": true, 00:10:16.328 "nvme_io_md": false, 00:10:16.328 "write_zeroes": true, 00:10:16.328 "zcopy": false, 00:10:16.328 "get_zone_info": false, 00:10:16.328 "zone_management": false, 00:10:16.328 "zone_append": false, 00:10:16.328 "compare": true, 00:10:16.328 "compare_and_write": true, 00:10:16.328 "abort": true, 00:10:16.328 "seek_hole": false, 00:10:16.328 "seek_data": false, 00:10:16.328 "copy": true, 00:10:16.328 "nvme_iov_md": false 00:10:16.328 }, 00:10:16.328 "memory_domains": [ 00:10:16.328 { 00:10:16.328 "dma_device_id": "system", 00:10:16.328 "dma_device_type": 1 00:10:16.328 } 00:10:16.328 ], 00:10:16.328 "driver_specific": { 00:10:16.328 "nvme": [ 00:10:16.328 { 00:10:16.328 "trid": { 00:10:16.328 "trtype": "TCP", 00:10:16.328 "adrfam": "IPv4", 00:10:16.328 "traddr": "10.0.0.3", 00:10:16.328 "trsvcid": "4420", 00:10:16.328 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:16.328 }, 00:10:16.328 "ctrlr_data": { 00:10:16.328 "cntlid": 1, 00:10:16.328 "vendor_id": "0x8086", 00:10:16.328 "model_number": "SPDK bdev Controller", 00:10:16.328 "serial_number": "SPDK0", 00:10:16.328 "firmware_revision": "25.01", 00:10:16.329 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:16.329 "oacs": { 00:10:16.329 "security": 0, 00:10:16.329 "format": 0, 00:10:16.329 "firmware": 0, 
00:10:16.329 "ns_manage": 0 00:10:16.329 }, 00:10:16.329 "multi_ctrlr": true, 00:10:16.329 "ana_reporting": false 00:10:16.329 }, 00:10:16.329 "vs": { 00:10:16.329 "nvme_version": "1.3" 00:10:16.329 }, 00:10:16.329 "ns_data": { 00:10:16.329 "id": 1, 00:10:16.329 "can_share": true 00:10:16.329 } 00:10:16.329 } 00:10:16.329 ], 00:10:16.329 "mp_policy": "active_passive" 00:10:16.329 } 00:10:16.329 } 00:10:16.329 ] 00:10:16.329 23:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65619 00:10:16.329 23:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:16.329 23:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:16.588 Running I/O for 10 seconds... 00:10:17.527 Latency(us) 00:10:17.527 [2024-11-18T23:54:24.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.527 Nvme0n1 : 1.00 5464.00 21.34 0.00 0.00 0.00 0.00 0.00 00:10:17.527 [2024-11-18T23:54:24.219Z] =================================================================================================================== 00:10:17.527 [2024-11-18T23:54:24.219Z] Total : 5464.00 21.34 0.00 0.00 0.00 0.00 0.00 00:10:17.527 00:10:18.464 23:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 05503aad-8825-43f7-80ec-59d8ff3deb94 00:10:18.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.464 Nvme0n1 : 2.00 5526.00 21.59 0.00 0.00 0.00 0.00 0.00 00:10:18.464 [2024-11-18T23:54:25.156Z] =================================================================================================================== 00:10:18.464 [2024-11-18T23:54:25.156Z] Total : 5526.00 21.59 0.00 0.00 0.00 0.00 0.00 00:10:18.464 00:10:18.724 true 00:10:18.724 23:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05503aad-8825-43f7-80ec-59d8ff3deb94 00:10:18.724 23:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:18.983 23:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:18.983 23:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:18.983 23:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65619 00:10:19.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:19.552 Nvme0n1 : 3.00 5459.67 21.33 0.00 0.00 0.00 0.00 0.00 00:10:19.552 [2024-11-18T23:54:26.244Z] =================================================================================================================== 00:10:19.552 [2024-11-18T23:54:26.244Z] Total : 5459.67 21.33 0.00 0.00 0.00 0.00 0.00 00:10:19.552 00:10:20.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:20.489 Nvme0n1 : 4.00 5428.25 21.20 0.00 0.00 0.00 0.00 0.00 00:10:20.489 [2024-11-18T23:54:27.181Z] 
=================================================================================================================== 00:10:20.489 [2024-11-18T23:54:27.181Z] Total : 5428.25 21.20 0.00 0.00 0.00 0.00 0.00 00:10:20.489 00:10:21.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.428 Nvme0n1 : 5.00 5409.40 21.13 0.00 0.00 0.00 0.00 0.00 00:10:21.428 [2024-11-18T23:54:28.120Z] =================================================================================================================== 00:10:21.428 [2024-11-18T23:54:28.120Z] Total : 5409.40 21.13 0.00 0.00 0.00 0.00 0.00 00:10:21.428 00:10:22.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.804 Nvme0n1 : 6.00 5418.00 21.16 0.00 0.00 0.00 0.00 0.00 00:10:22.804 [2024-11-18T23:54:29.496Z] =================================================================================================================== 00:10:22.804 [2024-11-18T23:54:29.496Z] Total : 5418.00 21.16 0.00 0.00 0.00 0.00 0.00 00:10:22.804 00:10:23.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:23.799 Nvme0n1 : 7.00 5406.00 21.12 0.00 0.00 0.00 0.00 0.00 00:10:23.799 [2024-11-18T23:54:30.491Z] =================================================================================================================== 00:10:23.799 [2024-11-18T23:54:30.491Z] Total : 5406.00 21.12 0.00 0.00 0.00 0.00 0.00 00:10:23.799 00:10:24.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:24.375 Nvme0n1 : 8.00 5412.88 21.14 0.00 0.00 0.00 0.00 0.00 00:10:24.375 [2024-11-18T23:54:31.067Z] =================================================================================================================== 00:10:24.375 [2024-11-18T23:54:31.067Z] Total : 5412.88 21.14 0.00 0.00 0.00 0.00 0.00 00:10:24.375 00:10:25.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:25.752 Nvme0n1 : 9.00 5390.00 21.05 0.00 0.00 0.00 0.00 0.00 00:10:25.752 [2024-11-18T23:54:32.444Z] =================================================================================================================== 00:10:25.752 [2024-11-18T23:54:32.444Z] Total : 5390.00 21.05 0.00 0.00 0.00 0.00 0.00 00:10:25.752 00:10:26.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:26.687 Nvme0n1 : 10.00 5384.40 21.03 0.00 0.00 0.00 0.00 0.00 00:10:26.687 [2024-11-18T23:54:33.379Z] =================================================================================================================== 00:10:26.687 [2024-11-18T23:54:33.379Z] Total : 5384.40 21.03 0.00 0.00 0.00 0.00 0.00 00:10:26.687 00:10:26.687 00:10:26.687 Latency(us) 00:10:26.687 [2024-11-18T23:54:33.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:26.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:26.688 Nvme0n1 : 10.00 5381.85 21.02 0.00 0.00 23773.19 8340.95 75783.45 00:10:26.688 [2024-11-18T23:54:33.380Z] =================================================================================================================== 00:10:26.688 [2024-11-18T23:54:33.380Z] Total : 5381.85 21.02 0.00 0.00 23773.19 8340.95 75783.45 00:10:26.688 { 00:10:26.688 "results": [ 00:10:26.688 { 00:10:26.688 "job": "Nvme0n1", 00:10:26.688 "core_mask": "0x2", 00:10:26.688 "workload": "randwrite", 00:10:26.688 "status": "finished", 00:10:26.688 "queue_depth": 128, 00:10:26.688 "io_size": 4096, 00:10:26.688 "runtime": 
10.004922, 00:10:26.688 "iops": 5381.851052911757, 00:10:26.688 "mibps": 21.02285567543655, 00:10:26.688 "io_failed": 0, 00:10:26.688 "io_timeout": 0, 00:10:26.688 "avg_latency_us": 23773.18939406883, 00:10:26.688 "min_latency_us": 8340.945454545454, 00:10:26.688 "max_latency_us": 75783.44727272727 00:10:26.688 } 00:10:26.688 ], 00:10:26.688 "core_count": 1 00:10:26.688 } 00:10:26.688 23:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65595 00:10:26.688 23:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 65595 ']' 00:10:26.688 23:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 65595 00:10:26.688 23:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:10:26.688 23:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.688 23:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65595 00:10:26.688 killing process with pid 65595 00:10:26.688 Received shutdown signal, test time was about 10.000000 seconds 00:10:26.688 00:10:26.688 Latency(us) 00:10:26.688 [2024-11-18T23:54:33.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:26.688 [2024-11-18T23:54:33.380Z] =================================================================================================================== 00:10:26.688 [2024-11-18T23:54:33.380Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:26.688 23:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:26.688 23:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:26.688 23:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65595' 00:10:26.688 23:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 65595 00:10:26.688 23:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 65595 00:10:27.256 23:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:27.514 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:28.081 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05503aad-8825-43f7-80ec-59d8ff3deb94 00:10:28.081 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:28.081 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:28.081 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:28.081 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:28.340 [2024-11-18 23:54:34.984565] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:28.340 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05503aad-8825-43f7-80ec-59d8ff3deb94 00:10:28.340 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:10:28.340 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05503aad-8825-43f7-80ec-59d8ff3deb94 00:10:28.341 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:28.341 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:28.341 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:28.341 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:28.341 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:28.341 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:28.341 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:28.341 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:28.341 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05503aad-8825-43f7-80ec-59d8ff3deb94 00:10:28.908 request: 00:10:28.908 { 00:10:28.908 "uuid": "05503aad-8825-43f7-80ec-59d8ff3deb94", 00:10:28.908 "method": "bdev_lvol_get_lvstores", 00:10:28.908 "req_id": 1 00:10:28.908 } 00:10:28.908 Got JSON-RPC error response 00:10:28.908 response: 00:10:28.908 { 00:10:28.908 "code": -19, 00:10:28.908 "message": "No such device" 00:10:28.908 } 00:10:28.908 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:10:28.908 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:28.908 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:28.908 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:28.908 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:28.908 aio_bdev 00:10:28.908 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
30029927-642a-4ec6-8827-9aa954c4a939 00:10:28.908 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=30029927-642a-4ec6-8827-9aa954c4a939 00:10:28.908 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.908 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:10:28.908 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.908 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:28.908 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:29.167 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 30029927-642a-4ec6-8827-9aa954c4a939 -t 2000 00:10:29.425 [ 00:10:29.425 { 00:10:29.425 "name": "30029927-642a-4ec6-8827-9aa954c4a939", 00:10:29.425 "aliases": [ 00:10:29.425 "lvs/lvol" 00:10:29.425 ], 00:10:29.425 "product_name": "Logical Volume", 00:10:29.425 "block_size": 4096, 00:10:29.425 "num_blocks": 38912, 00:10:29.425 "uuid": "30029927-642a-4ec6-8827-9aa954c4a939", 00:10:29.425 "assigned_rate_limits": { 00:10:29.425 "rw_ios_per_sec": 0, 00:10:29.425 "rw_mbytes_per_sec": 0, 00:10:29.425 "r_mbytes_per_sec": 0, 00:10:29.425 "w_mbytes_per_sec": 0 00:10:29.425 }, 00:10:29.425 "claimed": false, 00:10:29.425 "zoned": false, 00:10:29.425 "supported_io_types": { 00:10:29.425 "read": true, 00:10:29.425 "write": true, 00:10:29.425 "unmap": true, 00:10:29.425 "flush": false, 00:10:29.425 "reset": true, 00:10:29.425 "nvme_admin": false, 00:10:29.425 "nvme_io": false, 00:10:29.425 "nvme_io_md": false, 00:10:29.425 "write_zeroes": true, 00:10:29.426 "zcopy": false, 00:10:29.426 "get_zone_info": false, 00:10:29.426 "zone_management": false, 00:10:29.426 "zone_append": false, 00:10:29.426 "compare": false, 00:10:29.426 "compare_and_write": false, 00:10:29.426 "abort": false, 00:10:29.426 "seek_hole": true, 00:10:29.426 "seek_data": true, 00:10:29.426 "copy": false, 00:10:29.426 "nvme_iov_md": false 00:10:29.426 }, 00:10:29.426 "driver_specific": { 00:10:29.426 "lvol": { 00:10:29.426 "lvol_store_uuid": "05503aad-8825-43f7-80ec-59d8ff3deb94", 00:10:29.426 "base_bdev": "aio_bdev", 00:10:29.426 "thin_provision": false, 00:10:29.426 "num_allocated_clusters": 38, 00:10:29.426 "snapshot": false, 00:10:29.426 "clone": false, 00:10:29.426 "esnap_clone": false 00:10:29.426 } 00:10:29.426 } 00:10:29.426 } 00:10:29.426 ] 00:10:29.426 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:10:29.426 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05503aad-8825-43f7-80ec-59d8ff3deb94 00:10:29.426 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:29.684 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:29.684 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:10:29.684 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 05503aad-8825-43f7-80ec-59d8ff3deb94 00:10:29.943 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:29.943 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 30029927-642a-4ec6-8827-9aa954c4a939 00:10:30.202 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 05503aad-8825-43f7-80ec-59d8ff3deb94 00:10:30.461 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:30.719 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:30.977 ************************************ 00:10:30.977 END TEST lvs_grow_clean 00:10:30.977 ************************************ 00:10:30.977 00:10:30.977 real 0m19.051s 00:10:30.977 user 0m18.166s 00:10:30.977 sys 0m2.401s 00:10:30.977 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.977 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:31.235 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:31.235 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:31.235 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.235 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:31.235 ************************************ 00:10:31.235 START TEST lvs_grow_dirty 00:10:31.235 ************************************ 00:10:31.235 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:10:31.235 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:31.235 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:31.235 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:31.235 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:31.235 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:31.235 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:31.235 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:31.235 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:31.235 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:31.493 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:31.493 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:31.752 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=49996c7b-b1e7-4603-90b9-73042b47e102 00:10:31.752 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49996c7b-b1e7-4603-90b9-73042b47e102 00:10:31.752 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:32.011 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:32.011 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:32.011 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 49996c7b-b1e7-4603-90b9-73042b47e102 lvol 150 00:10:32.269 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6fd2d516-2242-4122-84fa-484034a46752 00:10:32.269 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:32.269 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:32.528 [2024-11-18 23:54:39.055889] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:32.528 [2024-11-18 23:54:39.056025] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:32.528 true 00:10:32.528 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49996c7b-b1e7-4603-90b9-73042b47e102 00:10:32.528 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:32.786 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:32.786 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:33.045 23:54:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6fd2d516-2242-4122-84fa-484034a46752 00:10:33.304 23:54:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:33.563 [2024-11-18 23:54:39.992727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:33.563 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:33.822 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65875 00:10:33.822 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:33.822 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:33.822 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65875 /var/tmp/bdevperf.sock 00:10:33.822 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 65875 ']' 00:10:33.822 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:33.822 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:33.823 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:33.823 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.823 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:33.823 [2024-11-18 23:54:40.380687] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:10:33.823 [2024-11-18 23:54:40.380815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65875 ] 00:10:34.082 [2024-11-18 23:54:40.556177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.082 [2024-11-18 23:54:40.676733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.340 [2024-11-18 23:54:40.841811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:34.908 23:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.908 23:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:34.908 23:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:35.166 Nvme0n1 00:10:35.166 23:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:35.424 [ 00:10:35.424 { 00:10:35.424 "name": "Nvme0n1", 00:10:35.424 "aliases": [ 00:10:35.424 "6fd2d516-2242-4122-84fa-484034a46752" 00:10:35.424 ], 00:10:35.424 "product_name": "NVMe disk", 00:10:35.424 "block_size": 4096, 00:10:35.424 "num_blocks": 38912, 00:10:35.424 "uuid": "6fd2d516-2242-4122-84fa-484034a46752", 00:10:35.424 "numa_id": -1, 00:10:35.424 "assigned_rate_limits": { 00:10:35.424 "rw_ios_per_sec": 0, 00:10:35.424 "rw_mbytes_per_sec": 0, 00:10:35.424 "r_mbytes_per_sec": 0, 00:10:35.424 "w_mbytes_per_sec": 0 00:10:35.424 }, 00:10:35.425 "claimed": false, 00:10:35.425 "zoned": false, 00:10:35.425 "supported_io_types": { 00:10:35.425 "read": true, 00:10:35.425 "write": true, 00:10:35.425 "unmap": true, 00:10:35.425 "flush": true, 00:10:35.425 "reset": true, 00:10:35.425 "nvme_admin": true, 00:10:35.425 "nvme_io": true, 00:10:35.425 "nvme_io_md": false, 00:10:35.425 "write_zeroes": true, 00:10:35.425 "zcopy": false, 00:10:35.425 "get_zone_info": false, 00:10:35.425 "zone_management": false, 00:10:35.425 "zone_append": false, 00:10:35.425 "compare": true, 00:10:35.425 "compare_and_write": true, 00:10:35.425 "abort": true, 00:10:35.425 "seek_hole": false, 00:10:35.425 "seek_data": false, 00:10:35.425 "copy": true, 00:10:35.425 "nvme_iov_md": false 00:10:35.425 }, 00:10:35.425 "memory_domains": [ 00:10:35.425 { 00:10:35.425 "dma_device_id": "system", 00:10:35.425 "dma_device_type": 1 00:10:35.425 } 00:10:35.425 ], 00:10:35.425 "driver_specific": { 00:10:35.425 "nvme": [ 00:10:35.425 { 00:10:35.425 "trid": { 00:10:35.425 "trtype": "TCP", 00:10:35.425 "adrfam": "IPv4", 00:10:35.425 "traddr": "10.0.0.3", 00:10:35.425 "trsvcid": "4420", 00:10:35.425 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:35.425 }, 00:10:35.425 "ctrlr_data": { 00:10:35.425 "cntlid": 1, 00:10:35.425 "vendor_id": "0x8086", 00:10:35.425 "model_number": "SPDK bdev Controller", 00:10:35.425 "serial_number": "SPDK0", 00:10:35.425 "firmware_revision": "25.01", 00:10:35.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:35.425 "oacs": { 00:10:35.425 "security": 0, 00:10:35.425 "format": 0, 00:10:35.425 "firmware": 0, 
00:10:35.425 "ns_manage": 0 00:10:35.425 }, 00:10:35.425 "multi_ctrlr": true, 00:10:35.425 "ana_reporting": false 00:10:35.425 }, 00:10:35.425 "vs": { 00:10:35.425 "nvme_version": "1.3" 00:10:35.425 }, 00:10:35.425 "ns_data": { 00:10:35.425 "id": 1, 00:10:35.425 "can_share": true 00:10:35.425 } 00:10:35.425 } 00:10:35.425 ], 00:10:35.425 "mp_policy": "active_passive" 00:10:35.425 } 00:10:35.425 } 00:10:35.425 ] 00:10:35.425 23:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65898 00:10:35.425 23:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:35.425 23:54:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:35.425 Running I/O for 10 seconds... 00:10:36.803 Latency(us) 00:10:36.803 [2024-11-18T23:54:43.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.803 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.803 Nvme0n1 : 1.00 5334.00 20.84 0.00 0.00 0.00 0.00 0.00 00:10:36.803 [2024-11-18T23:54:43.495Z] =================================================================================================================== 00:10:36.803 [2024-11-18T23:54:43.495Z] Total : 5334.00 20.84 0.00 0.00 0.00 0.00 0.00 00:10:36.803 00:10:37.371 23:54:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 49996c7b-b1e7-4603-90b9-73042b47e102 00:10:37.629 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.629 Nvme0n1 : 2.00 5270.50 20.59 0.00 0.00 0.00 0.00 0.00 00:10:37.629 [2024-11-18T23:54:44.321Z] =================================================================================================================== 00:10:37.629 [2024-11-18T23:54:44.322Z] Total : 5270.50 20.59 0.00 0.00 0.00 0.00 0.00 00:10:37.630 00:10:37.630 true 00:10:37.630 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49996c7b-b1e7-4603-90b9-73042b47e102 00:10:37.630 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:38.220 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:38.220 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:38.220 23:54:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 65898 00:10:38.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.479 Nvme0n1 : 3.00 5334.00 20.84 0.00 0.00 0.00 0.00 0.00 00:10:38.479 [2024-11-18T23:54:45.171Z] =================================================================================================================== 00:10:38.479 [2024-11-18T23:54:45.171Z] Total : 5334.00 20.84 0.00 0.00 0.00 0.00 0.00 00:10:38.479 00:10:39.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.416 Nvme0n1 : 4.00 5397.50 21.08 0.00 0.00 0.00 0.00 0.00 00:10:39.416 [2024-11-18T23:54:46.108Z] 
=================================================================================================================== 00:10:39.416 [2024-11-18T23:54:46.108Z] Total : 5397.50 21.08 0.00 0.00 0.00 0.00 0.00 00:10:39.416 00:10:40.793 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.793 Nvme0n1 : 5.00 5297.60 20.69 0.00 0.00 0.00 0.00 0.00 00:10:40.793 [2024-11-18T23:54:47.485Z] =================================================================================================================== 00:10:40.793 [2024-11-18T23:54:47.485Z] Total : 5297.60 20.69 0.00 0.00 0.00 0.00 0.00 00:10:40.793 00:10:41.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:41.730 Nvme0n1 : 6.00 5282.50 20.63 0.00 0.00 0.00 0.00 0.00 00:10:41.730 [2024-11-18T23:54:48.422Z] =================================================================================================================== 00:10:41.730 [2024-11-18T23:54:48.422Z] Total : 5282.50 20.63 0.00 0.00 0.00 0.00 0.00 00:10:41.730 00:10:42.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:42.679 Nvme0n1 : 7.00 5253.57 20.52 0.00 0.00 0.00 0.00 0.00 00:10:42.679 [2024-11-18T23:54:49.371Z] =================================================================================================================== 00:10:42.679 [2024-11-18T23:54:49.371Z] Total : 5253.57 20.52 0.00 0.00 0.00 0.00 0.00 00:10:42.679 00:10:43.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:43.619 Nvme0n1 : 8.00 5216.00 20.38 0.00 0.00 0.00 0.00 0.00 00:10:43.619 [2024-11-18T23:54:50.311Z] =================================================================================================================== 00:10:43.619 [2024-11-18T23:54:50.311Z] Total : 5216.00 20.38 0.00 0.00 0.00 0.00 0.00 00:10:43.619 00:10:44.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.556 Nvme0n1 : 9.00 5215.00 20.37 0.00 0.00 0.00 0.00 0.00 00:10:44.556 [2024-11-18T23:54:51.248Z] =================================================================================================================== 00:10:44.556 [2024-11-18T23:54:51.248Z] Total : 5215.00 20.37 0.00 0.00 0.00 0.00 0.00 00:10:44.556 00:10:45.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:45.492 Nvme0n1 : 10.00 5239.60 20.47 0.00 0.00 0.00 0.00 0.00 00:10:45.492 [2024-11-18T23:54:52.184Z] =================================================================================================================== 00:10:45.492 [2024-11-18T23:54:52.184Z] Total : 5239.60 20.47 0.00 0.00 0.00 0.00 0.00 00:10:45.492 00:10:45.492 00:10:45.492 Latency(us) 00:10:45.492 [2024-11-18T23:54:52.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:45.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:45.492 Nvme0n1 : 10.03 5239.23 20.47 0.00 0.00 24423.09 17277.67 96278.34 00:10:45.492 [2024-11-18T23:54:52.184Z] =================================================================================================================== 00:10:45.492 [2024-11-18T23:54:52.184Z] Total : 5239.23 20.47 0.00 0.00 24423.09 17277.67 96278.34 00:10:45.492 { 00:10:45.492 "results": [ 00:10:45.492 { 00:10:45.492 "job": "Nvme0n1", 00:10:45.492 "core_mask": "0x2", 00:10:45.492 "workload": "randwrite", 00:10:45.492 "status": "finished", 00:10:45.492 "queue_depth": 128, 00:10:45.492 "io_size": 4096, 00:10:45.492 "runtime": 
10.025128, 00:10:45.492 "iops": 5239.234850667243, 00:10:45.492 "mibps": 20.46576113541892, 00:10:45.492 "io_failed": 0, 00:10:45.492 "io_timeout": 0, 00:10:45.492 "avg_latency_us": 24423.09015307288, 00:10:45.492 "min_latency_us": 17277.672727272726, 00:10:45.492 "max_latency_us": 96278.34181818181 00:10:45.492 } 00:10:45.492 ], 00:10:45.492 "core_count": 1 00:10:45.492 } 00:10:45.492 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65875 00:10:45.492 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 65875 ']' 00:10:45.492 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 65875 00:10:45.492 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:45.492 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.492 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65875 00:10:45.492 killing process with pid 65875 00:10:45.492 Received shutdown signal, test time was about 10.000000 seconds 00:10:45.492 00:10:45.492 Latency(us) 00:10:45.492 [2024-11-18T23:54:52.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:45.492 [2024-11-18T23:54:52.184Z] =================================================================================================================== 00:10:45.492 [2024-11-18T23:54:52.184Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:45.492 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:45.492 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:45.492 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65875' 00:10:45.492 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 65875 00:10:45.492 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 65875 00:10:46.430 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:46.688 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:46.946 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49996c7b-b1e7-4603-90b9-73042b47e102 00:10:46.946 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:47.205 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:47.205 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:47.205 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65507 
00:10:47.205 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65507 00:10:47.464 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65507 Killed "${NVMF_APP[@]}" "$@" 00:10:47.464 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:47.464 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:47.464 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:47.464 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:47.464 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:47.464 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:47.464 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=66043 00:10:47.464 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 66043 00:10:47.464 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66043 ']' 00:10:47.464 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.464 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.464 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.464 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.464 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:47.464 [2024-11-18 23:54:54.050920] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:47.464 [2024-11-18 23:54:54.051079] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.723 [2024-11-18 23:54:54.227720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.723 [2024-11-18 23:54:54.314179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.723 [2024-11-18 23:54:54.314251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.723 [2024-11-18 23:54:54.314268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.723 [2024-11-18 23:54:54.314291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.723 [2024-11-18 23:54:54.314304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
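At this point the lvs_grow_dirty case has SIGKILLed the previous nvmf target (pid 65507) while the lvstore still held unflushed metadata, then started a fresh target (pid 66043). The recovery exercised next boils down to three RPCs; a minimal sketch, assuming the same rpc.py helpers, backing file, and lvol UUID seen throughout this run:

  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096  # re-attach the dirty backing file
  rpc.py bdev_wait_for_examine                                          # blobstore replays its unflushed metadata here
  rpc.py bdev_get_bdevs -b 6fd2d516-2242-4122-84fa-484034a46752 -t 2000 # the lvol reappears once recovery completes

The "Performing recovery on blobstore" notices in the trace below mark that replay.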
00:10:47.723 [2024-11-18 23:54:54.315373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.982 [2024-11-18 23:54:54.488934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:48.550 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.550 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:48.550 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:48.550 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:48.550 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:48.550 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.550 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:48.810 [2024-11-18 23:54:55.293279] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:48.810 [2024-11-18 23:54:55.293624] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:48.810 [2024-11-18 23:54:55.293864] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:48.810 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:48.810 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6fd2d516-2242-4122-84fa-484034a46752 00:10:48.810 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6fd2d516-2242-4122-84fa-484034a46752 00:10:48.810 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.810 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:48.810 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.810 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.810 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:49.069 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6fd2d516-2242-4122-84fa-484034a46752 -t 2000 00:10:49.328 [ 00:10:49.328 { 00:10:49.328 "name": "6fd2d516-2242-4122-84fa-484034a46752", 00:10:49.328 "aliases": [ 00:10:49.328 "lvs/lvol" 00:10:49.328 ], 00:10:49.328 "product_name": "Logical Volume", 00:10:49.328 "block_size": 4096, 00:10:49.328 "num_blocks": 38912, 00:10:49.328 "uuid": "6fd2d516-2242-4122-84fa-484034a46752", 00:10:49.328 "assigned_rate_limits": { 00:10:49.328 "rw_ios_per_sec": 0, 00:10:49.328 "rw_mbytes_per_sec": 0, 00:10:49.328 "r_mbytes_per_sec": 0, 00:10:49.328 "w_mbytes_per_sec": 0 00:10:49.328 }, 00:10:49.328 
"claimed": false, 00:10:49.328 "zoned": false, 00:10:49.328 "supported_io_types": { 00:10:49.328 "read": true, 00:10:49.328 "write": true, 00:10:49.328 "unmap": true, 00:10:49.328 "flush": false, 00:10:49.328 "reset": true, 00:10:49.328 "nvme_admin": false, 00:10:49.328 "nvme_io": false, 00:10:49.328 "nvme_io_md": false, 00:10:49.328 "write_zeroes": true, 00:10:49.328 "zcopy": false, 00:10:49.328 "get_zone_info": false, 00:10:49.328 "zone_management": false, 00:10:49.328 "zone_append": false, 00:10:49.328 "compare": false, 00:10:49.328 "compare_and_write": false, 00:10:49.328 "abort": false, 00:10:49.328 "seek_hole": true, 00:10:49.328 "seek_data": true, 00:10:49.328 "copy": false, 00:10:49.328 "nvme_iov_md": false 00:10:49.328 }, 00:10:49.328 "driver_specific": { 00:10:49.328 "lvol": { 00:10:49.328 "lvol_store_uuid": "49996c7b-b1e7-4603-90b9-73042b47e102", 00:10:49.328 "base_bdev": "aio_bdev", 00:10:49.328 "thin_provision": false, 00:10:49.328 "num_allocated_clusters": 38, 00:10:49.328 "snapshot": false, 00:10:49.328 "clone": false, 00:10:49.328 "esnap_clone": false 00:10:49.328 } 00:10:49.328 } 00:10:49.328 } 00:10:49.328 ] 00:10:49.328 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:49.328 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49996c7b-b1e7-4603-90b9-73042b47e102 00:10:49.328 23:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:49.587 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:49.587 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49996c7b-b1e7-4603-90b9-73042b47e102 00:10:49.587 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:49.846 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:49.846 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:50.106 [2024-11-18 23:54:56.590787] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:50.106 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49996c7b-b1e7-4603-90b9-73042b47e102 00:10:50.106 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:50.106 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49996c7b-b1e7-4603-90b9-73042b47e102 00:10:50.106 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:50.106 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.106 23:54:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:50.106 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.106 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:50.106 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.106 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:50.106 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:50.106 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49996c7b-b1e7-4603-90b9-73042b47e102 00:10:50.365 request: 00:10:50.365 { 00:10:50.365 "uuid": "49996c7b-b1e7-4603-90b9-73042b47e102", 00:10:50.365 "method": "bdev_lvol_get_lvstores", 00:10:50.365 "req_id": 1 00:10:50.365 } 00:10:50.365 Got JSON-RPC error response 00:10:50.365 response: 00:10:50.365 { 00:10:50.365 "code": -19, 00:10:50.365 "message": "No such device" 00:10:50.365 } 00:10:50.365 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:50.365 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:50.365 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:50.365 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:50.365 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:50.624 aio_bdev 00:10:50.624 23:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6fd2d516-2242-4122-84fa-484034a46752 00:10:50.624 23:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6fd2d516-2242-4122-84fa-484034a46752 00:10:50.624 23:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.624 23:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:50.624 23:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.624 23:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.624 23:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:50.883 23:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6fd2d516-2242-4122-84fa-484034a46752 -t 2000 00:10:51.142 [ 00:10:51.142 { 
00:10:51.142 "name": "6fd2d516-2242-4122-84fa-484034a46752", 00:10:51.142 "aliases": [ 00:10:51.142 "lvs/lvol" 00:10:51.142 ], 00:10:51.142 "product_name": "Logical Volume", 00:10:51.142 "block_size": 4096, 00:10:51.142 "num_blocks": 38912, 00:10:51.142 "uuid": "6fd2d516-2242-4122-84fa-484034a46752", 00:10:51.142 "assigned_rate_limits": { 00:10:51.142 "rw_ios_per_sec": 0, 00:10:51.142 "rw_mbytes_per_sec": 0, 00:10:51.142 "r_mbytes_per_sec": 0, 00:10:51.142 "w_mbytes_per_sec": 0 00:10:51.142 }, 00:10:51.142 "claimed": false, 00:10:51.142 "zoned": false, 00:10:51.142 "supported_io_types": { 00:10:51.142 "read": true, 00:10:51.142 "write": true, 00:10:51.142 "unmap": true, 00:10:51.142 "flush": false, 00:10:51.142 "reset": true, 00:10:51.142 "nvme_admin": false, 00:10:51.142 "nvme_io": false, 00:10:51.142 "nvme_io_md": false, 00:10:51.142 "write_zeroes": true, 00:10:51.142 "zcopy": false, 00:10:51.142 "get_zone_info": false, 00:10:51.142 "zone_management": false, 00:10:51.142 "zone_append": false, 00:10:51.142 "compare": false, 00:10:51.142 "compare_and_write": false, 00:10:51.142 "abort": false, 00:10:51.142 "seek_hole": true, 00:10:51.142 "seek_data": true, 00:10:51.142 "copy": false, 00:10:51.142 "nvme_iov_md": false 00:10:51.142 }, 00:10:51.142 "driver_specific": { 00:10:51.142 "lvol": { 00:10:51.142 "lvol_store_uuid": "49996c7b-b1e7-4603-90b9-73042b47e102", 00:10:51.143 "base_bdev": "aio_bdev", 00:10:51.143 "thin_provision": false, 00:10:51.143 "num_allocated_clusters": 38, 00:10:51.143 "snapshot": false, 00:10:51.143 "clone": false, 00:10:51.143 "esnap_clone": false 00:10:51.143 } 00:10:51.143 } 00:10:51.143 } 00:10:51.143 ] 00:10:51.143 23:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:51.143 23:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49996c7b-b1e7-4603-90b9-73042b47e102 00:10:51.143 23:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:51.407 23:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:51.407 23:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49996c7b-b1e7-4603-90b9-73042b47e102 00:10:51.407 23:54:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:51.704 23:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:51.704 23:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6fd2d516-2242-4122-84fa-484034a46752 00:10:51.966 23:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 49996c7b-b1e7-4603-90b9-73042b47e102 00:10:52.225 23:54:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:52.483 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:52.740 ************************************ 00:10:52.740 END TEST lvs_grow_dirty 00:10:52.740 ************************************ 00:10:52.740 00:10:52.740 real 0m21.641s 00:10:52.740 user 0m45.623s 00:10:52.740 sys 0m8.906s 00:10:52.741 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.741 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:52.741 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:52.741 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:52.741 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:52.741 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:52.741 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:52.741 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:52.741 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:52.741 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:52.741 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:52.741 nvmf_trace.0 00:10:52.998 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:52.998 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:52.998 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:52.998 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:53.258 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:53.258 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:53.258 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:53.258 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:53.258 rmmod nvme_tcp 00:10:53.258 rmmod nvme_fabrics 00:10:53.258 rmmod nvme_keyring 00:10:53.258 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:53.258 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:53.258 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:53.258 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 66043 ']' 00:10:53.258 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 66043 00:10:53.258 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 66043 ']' 00:10:53.258 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 66043 00:10:53.258 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:53.258 23:54:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.258 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66043 00:10:53.258 killing process with pid 66043 00:10:53.258 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.258 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.258 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66043' 00:10:53.258 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 66043 00:10:53.258 23:54:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 66043 00:10:54.193 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:54.193 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:54.193 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:54.193 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:54.193 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:54.193 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:54.193 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:54.193 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:54.193 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:54.193 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:54.193 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:54.193 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:54.193 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:54.193 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:54.193 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:54.193 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:54.193 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:54.193 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:54.452 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:54.452 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:54.452 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:54.452 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:54.452 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:10:54.452 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.452 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.452 23:55:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.452 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:10:54.452 ************************************ 00:10:54.452 END TEST nvmf_lvs_grow 00:10:54.452 ************************************ 00:10:54.452 00:10:54.452 real 0m44.448s 00:10:54.452 user 1m11.338s 00:10:54.452 sys 0m12.293s 00:10:54.452 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.452 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:54.452 23:55:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:54.452 23:55:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:54.452 23:55:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.452 23:55:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:54.452 ************************************ 00:10:54.452 START TEST nvmf_bdev_io_wait 00:10:54.452 ************************************ 00:10:54.452 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:54.452 * Looking for test storage... 
00:10:54.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:54.452 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:54.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.712 --rc genhtml_branch_coverage=1 00:10:54.712 --rc genhtml_function_coverage=1 00:10:54.712 --rc genhtml_legend=1 00:10:54.712 --rc geninfo_all_blocks=1 00:10:54.712 --rc geninfo_unexecuted_blocks=1 00:10:54.712 00:10:54.712 ' 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:54.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.712 --rc genhtml_branch_coverage=1 00:10:54.712 --rc genhtml_function_coverage=1 00:10:54.712 --rc genhtml_legend=1 00:10:54.712 --rc geninfo_all_blocks=1 00:10:54.712 --rc geninfo_unexecuted_blocks=1 00:10:54.712 00:10:54.712 ' 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:54.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.712 --rc genhtml_branch_coverage=1 00:10:54.712 --rc genhtml_function_coverage=1 00:10:54.712 --rc genhtml_legend=1 00:10:54.712 --rc geninfo_all_blocks=1 00:10:54.712 --rc geninfo_unexecuted_blocks=1 00:10:54.712 00:10:54.712 ' 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:54.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.712 --rc genhtml_branch_coverage=1 00:10:54.712 --rc genhtml_function_coverage=1 00:10:54.712 --rc genhtml_legend=1 00:10:54.712 --rc geninfo_all_blocks=1 00:10:54.712 --rc geninfo_unexecuted_blocks=1 00:10:54.712 00:10:54.712 ' 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.712 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:54.713 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
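With NET_TYPE=virt, nvmftestinit builds the whole fabric out of veth pairs and a bridge instead of touching real NICs. Condensed from the trace that follows (the second initiator/target veth pair, the 10.0.0.2/10.0.0.4 legs, link-up steps, and the iptables ACCEPT rules are created the same way and omitted here for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target interface lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

The pings at the end of the setup verify both legs of the topology before any NVMe-oF traffic is attempted.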
00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:54.713 
23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:54.713 Cannot find device "nvmf_init_br" 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:54.713 Cannot find device "nvmf_init_br2" 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:54.713 Cannot find device "nvmf_tgt_br" 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:54.713 Cannot find device "nvmf_tgt_br2" 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:54.713 Cannot find device "nvmf_init_br" 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:54.713 Cannot find device "nvmf_init_br2" 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:54.713 Cannot find device "nvmf_tgt_br" 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:54.713 Cannot find device "nvmf_tgt_br2" 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:54.713 Cannot find device "nvmf_br" 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:54.713 Cannot find device "nvmf_init_if" 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:54.713 Cannot find device "nvmf_init_if2" 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:54.713 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:54.713 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:10:54.713 
23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:54.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:54.972 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:55.231 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:55.231 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:55.231 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:55.231 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:55.231 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:55.231 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:55.231 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:55.231 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:55.231 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:10:55.231 00:10:55.231 --- 10.0.0.3 ping statistics --- 00:10:55.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.231 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:10:55.231 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:55.231 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:55.231 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:10:55.231 00:10:55.231 --- 10.0.0.4 ping statistics --- 00:10:55.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.231 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:55.231 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:55.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:55.232 00:10:55.232 --- 10.0.0.1 ping statistics --- 00:10:55.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.232 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:55.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:55.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:10:55.232 00:10:55.232 --- 10.0.0.2 ping statistics --- 00:10:55.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.232 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=66422 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 66422 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 66422 ']' 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.232 23:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:55.232 [2024-11-18 23:55:01.834301] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:10:55.232 [2024-11-18 23:55:01.834467] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.491 [2024-11-18 23:55:02.026660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.491 [2024-11-18 23:55:02.158627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.491 [2024-11-18 23:55:02.158692] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.491 [2024-11-18 23:55:02.158715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.491 [2024-11-18 23:55:02.158730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.491 [2024-11-18 23:55:02.158745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.491 [2024-11-18 23:55:02.160853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.491 [2024-11-18 23:55:02.160988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.491 [2024-11-18 23:55:02.161184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.491 [2024-11-18 23:55:02.161714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.427 23:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.427 23:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:56.427 23:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:56.427 23:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:56.427 23:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:56.427 23:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.427 23:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:56.427 23:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.427 23:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:56.427 23:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.427 23:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:56.427 23:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.427 23:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:56.427 [2024-11-18 23:55:03.089439] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:56.427 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.427 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.427 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.427 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:56.427 [2024-11-18 23:55:03.110791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:56.686 Malloc0 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:56.686 [2024-11-18 23:55:03.216331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66457 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66459 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:56.686 23:55:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:56.686 { 00:10:56.686 "params": { 00:10:56.686 "name": "Nvme$subsystem", 00:10:56.686 "trtype": "$TEST_TRANSPORT", 00:10:56.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:56.686 "adrfam": "ipv4", 00:10:56.686 "trsvcid": "$NVMF_PORT", 00:10:56.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:56.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:56.686 "hdgst": ${hdgst:-false}, 00:10:56.686 "ddgst": ${ddgst:-false} 00:10:56.686 }, 00:10:56.686 "method": "bdev_nvme_attach_controller" 00:10:56.686 } 00:10:56.686 EOF 00:10:56.686 )") 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:56.686 { 00:10:56.686 "params": { 00:10:56.686 "name": "Nvme$subsystem", 00:10:56.686 "trtype": "$TEST_TRANSPORT", 00:10:56.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:56.686 "adrfam": "ipv4", 00:10:56.686 "trsvcid": "$NVMF_PORT", 00:10:56.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:56.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:56.686 "hdgst": ${hdgst:-false}, 00:10:56.686 "ddgst": ${ddgst:-false} 00:10:56.686 }, 00:10:56.686 "method": "bdev_nvme_attach_controller" 00:10:56.686 } 00:10:56.686 EOF 00:10:56.686 )") 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66461 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66465 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:56.686 { 00:10:56.686 "params": { 00:10:56.686 "name": "Nvme$subsystem", 00:10:56.686 "trtype": 
"$TEST_TRANSPORT", 00:10:56.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:56.686 "adrfam": "ipv4", 00:10:56.686 "trsvcid": "$NVMF_PORT", 00:10:56.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:56.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:56.686 "hdgst": ${hdgst:-false}, 00:10:56.686 "ddgst": ${ddgst:-false} 00:10:56.686 }, 00:10:56.686 "method": "bdev_nvme_attach_controller" 00:10:56.686 } 00:10:56.686 EOF 00:10:56.686 )") 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:56.686 "params": { 00:10:56.686 "name": "Nvme1", 00:10:56.686 "trtype": "tcp", 00:10:56.686 "traddr": "10.0.0.3", 00:10:56.686 "adrfam": "ipv4", 00:10:56.686 "trsvcid": "4420", 00:10:56.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:56.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:56.686 "hdgst": false, 00:10:56.686 "ddgst": false 00:10:56.686 }, 00:10:56.686 "method": "bdev_nvme_attach_controller" 00:10:56.686 }' 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:56.686 { 00:10:56.686 "params": { 00:10:56.686 "name": "Nvme$subsystem", 00:10:56.686 "trtype": "$TEST_TRANSPORT", 00:10:56.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:56.686 "adrfam": "ipv4", 00:10:56.686 "trsvcid": "$NVMF_PORT", 00:10:56.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:56.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:56.686 "hdgst": ${hdgst:-false}, 00:10:56.686 "ddgst": ${ddgst:-false} 00:10:56.686 }, 00:10:56.686 "method": "bdev_nvme_attach_controller" 00:10:56.686 } 00:10:56.686 EOF 00:10:56.686 )") 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:56.686 "params": { 00:10:56.686 "name": "Nvme1", 00:10:56.686 "trtype": "tcp", 00:10:56.686 "traddr": "10.0.0.3", 00:10:56.686 "adrfam": "ipv4", 00:10:56.686 "trsvcid": "4420", 00:10:56.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:56.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:56.686 "hdgst": false, 00:10:56.686 "ddgst": false 00:10:56.686 }, 00:10:56.686 "method": "bdev_nvme_attach_controller" 00:10:56.686 }' 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:56.686 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:56.686 "params": { 00:10:56.686 "name": "Nvme1", 00:10:56.686 "trtype": "tcp", 00:10:56.686 "traddr": "10.0.0.3", 00:10:56.686 "adrfam": "ipv4", 00:10:56.686 "trsvcid": "4420", 00:10:56.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:56.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:56.686 "hdgst": false, 00:10:56.686 "ddgst": false 00:10:56.686 }, 00:10:56.686 "method": "bdev_nvme_attach_controller" 00:10:56.686 }' 00:10:56.687 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:56.687 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:56.687 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:56.687 "params": { 00:10:56.687 "name": "Nvme1", 00:10:56.687 "trtype": "tcp", 00:10:56.687 "traddr": "10.0.0.3", 00:10:56.687 "adrfam": "ipv4", 00:10:56.687 "trsvcid": "4420", 00:10:56.687 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:56.687 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:56.687 "hdgst": false, 00:10:56.687 "ddgst": false 00:10:56.687 }, 00:10:56.687 "method": "bdev_nvme_attach_controller" 00:10:56.687 }' 00:10:56.687 23:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66457 00:10:56.687 [2024-11-18 23:55:03.340365] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:56.687 [2024-11-18 23:55:03.340770] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:56.687 [2024-11-18 23:55:03.371053] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:56.687 [2024-11-18 23:55:03.371437] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:56.945 [2024-11-18 23:55:03.378141] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:56.945 [2024-11-18 23:55:03.378871] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:56.945 [2024-11-18 23:55:03.384272] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:10:56.945 [2024-11-18 23:55:03.384562] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:56.945 [2024-11-18 23:55:03.565486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.945 [2024-11-18 23:55:03.610629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.204 [2024-11-18 23:55:03.693616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.204 [2024-11-18 23:55:03.701694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.204 [2024-11-18 23:55:03.704388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:57.204 [2024-11-18 23:55:03.746290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:57.204 [2024-11-18 23:55:03.813010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:57.204 [2024-11-18 23:55:03.856192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:57.204 [2024-11-18 23:55:03.881780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:57.463 [2024-11-18 23:55:03.936813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:57.463 [2024-11-18 23:55:04.003052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:57.463 [2024-11-18 23:55:04.021126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:57.463 Running I/O for 1 seconds... 00:10:57.463 Running I/O for 1 seconds... 00:10:57.721 Running I/O for 1 seconds... 00:10:57.721 Running I/O for 1 seconds... 
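By this point the trace has a fully configured target and four bdevperf clients running ("Running I/O for 1 seconds..." above); the per-job results follow below. Condensed, the setup sequence amounts to the following sketch (commands and arguments are copied from the rpc_cmd/bdevperf entries above; rpc_cmd is the harness wrapper for SPDK RPCs, gen_nvmf_target_json is the harness function whose expanded JSON appears verbatim above, waitforlisten is the harness helper that polls /var/tmp/spdk.sock, and folding the four jobs into a loop is editorial):

    # Start the target inside the namespace and wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    waitforlisten $!

    # Configure the bdev layer, the TCP transport, and one malloc-backed subsystem.
    rpc_cmd bdev_set_options -p 5 -c 1
    rpc_cmd framework_start_init
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # One bdevperf instance per workload, each pinned to its own core; the
    # --json /dev/fd/63 in the trace is consistent with process substitution.
    for spec in "0x10 1 write" "0x20 2 read" "0x40 3 flush" "0x80 4 unmap"; do
        set -- $spec    # unquoted on purpose: core mask, shm instance id, workload
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m "$1" -i "$2" \
            -q 128 -o 4096 -w "$3" -t 1 -s 256 --json <(gen_nvmf_target_json) &
    done
    wait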
00:10:58.655 4740.00 IOPS, 18.52 MiB/s
00:10:58.655 Latency(us)
00:10:58.655 [2024-11-18T23:55:05.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:58.655 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:10:58.655 Nvme1n1 : 1.03 4728.31 18.47 0.00 0.00 26586.25 3485.32 43611.23
00:10:58.655 [2024-11-18T23:55:05.347Z] ===================================================================================================================
00:10:58.655 [2024-11-18T23:55:05.347Z] Total : 4728.31 18.47 0.00 0.00 26586.25 3485.32 43611.23
00:10:58.655 4605.00 IOPS, 17.99 MiB/s
00:10:58.655 Latency(us)
00:10:58.655 [2024-11-18T23:55:05.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:58.655 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:10:58.655 Nvme1n1 : 1.01 4722.11 18.45 0.00 0.00 26987.06 7804.74 48854.11
00:10:58.655 [2024-11-18T23:55:05.347Z] ===================================================================================================================
00:10:58.655 [2024-11-18T23:55:05.347Z] Total : 4722.11 18.45 0.00 0.00 26987.06 7804.74 48854.11
00:10:58.655 6494.00 IOPS, 25.37 MiB/s
00:10:58.655 Latency(us)
00:10:58.655 [2024-11-18T23:55:05.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:58.655 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:10:58.655 Nvme1n1 : 1.01 6535.99 25.53 0.00 0.00 19449.85 8460.10 28120.90
00:10:58.655 [2024-11-18T23:55:05.347Z] ===================================================================================================================
00:10:58.655 [2024-11-18T23:55:05.347Z] Total : 6535.99 25.53 0.00 0.00 19449.85 8460.10 28120.90
00:10:58.655 138632.00 IOPS, 541.53 MiB/s
00:10:58.655 Latency(us)
00:10:58.655 [2024-11-18T23:55:05.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:58.655 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:10:58.655 Nvme1n1 : 1.00 138279.51 540.15 0.00 0.00 920.71 491.52 4140.68
00:10:58.655 [2024-11-18T23:55:05.347Z] ===================================================================================================================
00:10:58.655 [2024-11-18T23:55:05.347Z] Total : 138279.51 540.15 0.00 0.00 920.71 491.52 4140.68
00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66459
00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66461
00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66465
00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- #
nvmfcleanup 00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:59.222 rmmod nvme_tcp 00:10:59.222 rmmod nvme_fabrics 00:10:59.222 rmmod nvme_keyring 00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 66422 ']' 00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 66422 00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 66422 ']' 00:10:59.222 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 66422 00:10:59.481 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:59.481 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.481 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66422 00:10:59.481 killing process with pid 66422 00:10:59.481 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:59.481 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:59.481 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66422' 00:10:59.481 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 66422 00:10:59.481 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 66422 00:11:00.416 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:00.416 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:00.416 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:00.417 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:00.417 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:00.417 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:11:00.417 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:11:00.417 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:00.417 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:00.417 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:11:00.417 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:11:00.417 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:11:00.417 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:11:00.417 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:11:00.417 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:11:00.417 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:11:00.417 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:11:00.417 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:11:00.417 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:11:00.417 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:11:00.417 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:00.417 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:00.417 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns
00:11:00.417 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:00.417 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:00.417 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0
00:11:00.676 ************************************
00:11:00.676 END TEST nvmf_bdev_io_wait
00:11:00.676 ************************************
00:11:00.676
00:11:00.676 real 0m6.048s
00:11:00.676 user 0m25.755s
00:11:00.676 sys 0m2.596s
00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:00.676 ************************************
00:11:00.676 START TEST nvmf_queue_depth
00:11:00.676 ************************************
00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:11:00.676 * Looking for test storage...
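The starred banners and the real/user/sys block above come from the run_test helper, which brackets each test script with START/END banners and times it. A behavioral sketch reconstructed purely from that output -- this is an approximation, not autotest_common.sh's actual implementation:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    # As invoked above; its storage probe continues below.
    run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp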
00:11:00.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:00.676 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:00.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.677 --rc genhtml_branch_coverage=1 00:11:00.677 --rc genhtml_function_coverage=1 00:11:00.677 --rc genhtml_legend=1 00:11:00.677 --rc geninfo_all_blocks=1 00:11:00.677 --rc geninfo_unexecuted_blocks=1 00:11:00.677 00:11:00.677 ' 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:00.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.677 --rc genhtml_branch_coverage=1 00:11:00.677 --rc genhtml_function_coverage=1 00:11:00.677 --rc genhtml_legend=1 00:11:00.677 --rc geninfo_all_blocks=1 00:11:00.677 --rc geninfo_unexecuted_blocks=1 00:11:00.677 00:11:00.677 ' 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:00.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.677 --rc genhtml_branch_coverage=1 00:11:00.677 --rc genhtml_function_coverage=1 00:11:00.677 --rc genhtml_legend=1 00:11:00.677 --rc geninfo_all_blocks=1 00:11:00.677 --rc geninfo_unexecuted_blocks=1 00:11:00.677 00:11:00.677 ' 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:00.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.677 --rc genhtml_branch_coverage=1 00:11:00.677 --rc genhtml_function_coverage=1 00:11:00.677 --rc genhtml_legend=1 00:11:00.677 --rc geninfo_all_blocks=1 00:11:00.677 --rc geninfo_unexecuted_blocks=1 00:11:00.677 00:11:00.677 ' 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.677 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:00.677 
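One detail worth flagging in the common.sh trace above: line 33 executes '[' '' -eq 1 ']' and bash reports "[: : integer expression expected", because test's -eq requires an integer on both sides and the variable being tested expanded to an empty string. A minimal sketch of the failure mode and a defensive variant (FLAG is a hypothetical stand-in; the trace does not show which variable was empty):

    FLAG=""                  # hypothetical: some numeric test flag left unset
    [ "$FLAG" -eq 1 ]        # -> bash: [: : integer expression expected (status 2)
    [ "${FLAG:-0}" -eq 1 ]   # defaulting to 0 keeps the comparison numeric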
23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:00.677 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:00.678 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:00.678 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.678 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:00.678 23:55:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:00.678 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:00.678 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:00.678 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:00.936 Cannot find device "nvmf_init_br" 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:00.936 Cannot find device "nvmf_init_br2" 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:00.936 Cannot find device "nvmf_tgt_br" 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:00.936 Cannot find device "nvmf_tgt_br2" 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:00.936 Cannot find device "nvmf_init_br" 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:00.936 Cannot find device "nvmf_init_br2" 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:00.936 Cannot find device "nvmf_tgt_br" 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:00.936 Cannot find device "nvmf_tgt_br2" 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:00.936 Cannot find device "nvmf_br" 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:00.936 Cannot find device "nvmf_init_if" 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:00.936 Cannot find device "nvmf_init_if2" 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:00.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:00.936 23:55:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:00.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:00.936 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:01.195 
23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:01.195 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:01.195 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:11:01.195 00:11:01.195 --- 10.0.0.3 ping statistics --- 00:11:01.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.195 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:01.195 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:01.195 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:11:01.195 00:11:01.195 --- 10.0.0.4 ping statistics --- 00:11:01.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.195 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:01.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:01.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:01.195 00:11:01.195 --- 10.0.0.1 ping statistics --- 00:11:01.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.195 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:01.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:01.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:11:01.195 00:11:01.195 --- 10.0.0.2 ping statistics --- 00:11:01.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.195 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=66768 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 66768 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 66768 ']' 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.195 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:01.454 [2024-11-18 23:55:07.905139] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
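Note: the four pings above verify bridge connectivity in both directions before nvmfappstart launches the target inside the namespace. A simplified sketch of what the @508-@510 trace does (binary path and socket path from the log; the real waitforlisten also probes the process over RPC and gives up after max_retries):

# Launch nvmf_tgt in the target netns, then wait for its RPC socket to appear
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
until [ -S /var/tmp/spdk.sock ]; do
    kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
    sleep 0.1
done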
00:11:01.454 [2024-11-18 23:55:07.905306] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.454 [2024-11-18 23:55:08.091965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.712 [2024-11-18 23:55:08.195986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.712 [2024-11-18 23:55:08.196047] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.712 [2024-11-18 23:55:08.196068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.712 [2024-11-18 23:55:08.196092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.712 [2024-11-18 23:55:08.196107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.712 [2024-11-18 23:55:08.197282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.712 [2024-11-18 23:55:08.380713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:02.277 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.277 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:11:02.277 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:02.277 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:02.277 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:02.277 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.277 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:02.277 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.277 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:02.277 [2024-11-18 23:55:08.894513] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.277 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.277 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:02.277 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.277 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:02.535 Malloc0 00:11:02.535 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.535 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:02.535 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.535 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:11:02.535 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.535 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:02.535 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.535 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:02.535 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.535 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:02.535 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.535 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:02.535 [2024-11-18 23:55:09.005054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:02.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:02.535 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.535 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66800 00:11:02.535 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:02.535 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66800 /var/tmp/bdevperf.sock 00:11:02.535 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:02.535 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 66800 ']' 00:11:02.535 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:02.535 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.535 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:02.535 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.535 23:55:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:02.535 [2024-11-18 23:55:09.122978] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
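Note: queue_depth.sh@23-@27 above provision the target entirely over JSON-RPC; rpc_cmd in the harness is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. The same sequence as direct calls, flags copied from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as traced
$rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420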
00:11:02.535 [2024-11-18 23:55:09.123684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66800 ] 00:11:02.792 [2024-11-18 23:55:09.310244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.792 [2024-11-18 23:55:09.437342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.051 [2024-11-18 23:55:09.616066] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:03.617 23:55:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.617 23:55:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:11:03.617 23:55:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:03.617 23:55:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.617 23:55:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:03.617 NVMe0n1 00:11:03.617 23:55:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.617 23:55:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:03.876 Running I/O for 10 seconds... 00:11:05.749 5762.00 IOPS, 22.51 MiB/s [2024-11-18T23:55:13.378Z] 6080.00 IOPS, 23.75 MiB/s [2024-11-18T23:55:14.753Z] 5932.00 IOPS, 23.17 MiB/s [2024-11-18T23:55:15.689Z] 5888.00 IOPS, 23.00 MiB/s [2024-11-18T23:55:16.622Z] 5838.00 IOPS, 22.80 MiB/s [2024-11-18T23:55:17.559Z] 5819.83 IOPS, 22.73 MiB/s [2024-11-18T23:55:18.496Z] 5851.43 IOPS, 22.86 MiB/s [2024-11-18T23:55:19.467Z] 5874.62 IOPS, 22.95 MiB/s [2024-11-18T23:55:20.404Z] 5891.67 IOPS, 23.01 MiB/s [2024-11-18T23:55:20.663Z] 5890.20 IOPS, 23.01 MiB/s 00:11:13.971 Latency(us) 00:11:13.971 [2024-11-18T23:55:20.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.971 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:13.971 Verification LBA range: start 0x0 length 0x4000 00:11:13.971 NVMe0n1 : 10.11 5926.56 23.15 0.00 0.00 171649.11 25618.62 117249.86 00:11:13.971 [2024-11-18T23:55:20.663Z] =================================================================================================================== 00:11:13.971 [2024-11-18T23:55:20.663Z] Total : 5926.56 23.15 0.00 0.00 171649.11 25618.62 117249.86 00:11:13.971 { 00:11:13.971 "results": [ 00:11:13.971 { 00:11:13.971 "job": "NVMe0n1", 00:11:13.971 "core_mask": "0x1", 00:11:13.971 "workload": "verify", 00:11:13.971 "status": "finished", 00:11:13.971 "verify_range": { 00:11:13.971 "start": 0, 00:11:13.971 "length": 16384 00:11:13.971 }, 00:11:13.971 "queue_depth": 1024, 00:11:13.971 "io_size": 4096, 00:11:13.971 "runtime": 10.111425, 00:11:13.971 "iops": 5926.563268777645, 00:11:13.971 "mibps": 23.150637768662676, 00:11:13.971 "io_failed": 0, 00:11:13.971 "io_timeout": 0, 00:11:13.971 "avg_latency_us": 171649.109320647, 00:11:13.971 "min_latency_us": 25618.618181818183, 00:11:13.971 "max_latency_us": 117249.86181818182 
00:11:13.971 } 00:11:13.971 ], 00:11:13.971 "core_count": 1 00:11:13.971 } 00:11:13.971 23:55:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66800 00:11:13.971 23:55:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 66800 ']' 00:11:13.971 23:55:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 66800 00:11:13.971 23:55:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:13.971 23:55:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.971 23:55:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66800 00:11:13.971 23:55:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:13.971 killing process with pid 66800 00:11:13.971 Received shutdown signal, test time was about 10.000000 seconds 00:11:13.971 00:11:13.971 Latency(us) 00:11:13.971 [2024-11-18T23:55:20.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.971 [2024-11-18T23:55:20.663Z] =================================================================================================================== 00:11:13.971 [2024-11-18T23:55:20.663Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:13.971 23:55:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:13.971 23:55:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66800' 00:11:13.971 23:55:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 66800 00:11:13.971 23:55:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 66800 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:14.908 rmmod nvme_tcp 00:11:14.908 rmmod nvme_fabrics 00:11:14.908 rmmod nvme_keyring 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 66768 ']' 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 66768 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 66768 ']' 
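Note: the JSON block above is bdevperf's perform_tests result for the -q 1024 -o 4096 -w verify -t 10 run (queue depth 1024, 4 KiB I/Os, verify workload, 10 seconds). The reported "mibps" is just iops x io_size; a quick cross-check, taking 1 MiB = 1048576 bytes:

# 5926.563268... IOPS x 4096 B/IO, divided by 1048576, gives the logged 23.15 MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 5926.563268777645 * 4096 / 1048576 }'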
00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 66768 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66768 00:11:14.908 killing process with pid 66768 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66768' 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 66768 00:11:14.908 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 66768 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:16.287 23:55:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:11:16.287 00:11:16.287 real 0m15.626s 00:11:16.287 user 0m26.022s 00:11:16.287 sys 0m2.427s 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.287 ************************************ 00:11:16.287 END TEST nvmf_queue_depth 00:11:16.287 ************************************ 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:16.287 ************************************ 00:11:16.287 START TEST nvmf_target_multipath 00:11:16.287 ************************************ 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:16.287 * Looking for test storage... 
00:11:16.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:16.287 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:11:16.547 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:16.547 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.547 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.547 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.547 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.547 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.547 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:16.547 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.547 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.547 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.547 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.547 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.547 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:16.547 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:16.547 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:16.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.548 --rc genhtml_branch_coverage=1 00:11:16.548 --rc genhtml_function_coverage=1 00:11:16.548 --rc genhtml_legend=1 00:11:16.548 --rc geninfo_all_blocks=1 00:11:16.548 --rc geninfo_unexecuted_blocks=1 00:11:16.548 00:11:16.548 ' 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:16.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.548 --rc genhtml_branch_coverage=1 00:11:16.548 --rc genhtml_function_coverage=1 00:11:16.548 --rc genhtml_legend=1 00:11:16.548 --rc geninfo_all_blocks=1 00:11:16.548 --rc geninfo_unexecuted_blocks=1 00:11:16.548 00:11:16.548 ' 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:16.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.548 --rc genhtml_branch_coverage=1 00:11:16.548 --rc genhtml_function_coverage=1 00:11:16.548 --rc genhtml_legend=1 00:11:16.548 --rc geninfo_all_blocks=1 00:11:16.548 --rc geninfo_unexecuted_blocks=1 00:11:16.548 00:11:16.548 ' 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:16.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.548 --rc genhtml_branch_coverage=1 00:11:16.548 --rc genhtml_function_coverage=1 00:11:16.548 --rc genhtml_legend=1 00:11:16.548 --rc geninfo_all_blocks=1 00:11:16.548 --rc geninfo_unexecuted_blocks=1 00:11:16.548 00:11:16.548 ' 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.548 
23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:16.548 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:16.548 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:16.549 23:55:23 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:16.549 Cannot find device "nvmf_init_br" 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:16.549 Cannot find device "nvmf_init_br2" 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:16.549 Cannot find device "nvmf_tgt_br" 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:16.549 Cannot find device "nvmf_tgt_br2" 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:16.549 Cannot find device "nvmf_init_br" 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:16.549 Cannot find device "nvmf_init_br2" 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:16.549 Cannot find device "nvmf_tgt_br" 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:16.549 Cannot find device "nvmf_tgt_br2" 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:16.549 Cannot find device "nvmf_br" 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:16.549 Cannot find device "nvmf_init_if" 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:16.549 Cannot find device "nvmf_init_if2" 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:16.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:16.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:16.549 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
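Note: the "Cannot find device ..." / "-- # true" pairs above are not failures. multipath.sh starts on a host with no leftover topology, and nvmf_veth_init begins with best-effort teardown; the repeated "# true" traces at the same common.sh line numbers indicate an "|| true"-style guard, roughly:

# Best-effort cleanup before rebuilding the topology; errors on a clean host are expected
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster || true
    ip link set "$dev" down || true
done
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if || true
ip link delete nvmf_init_if2 || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true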
00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:16.808 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:16.808 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:11:16.808 00:11:16.808 --- 10.0.0.3 ping statistics --- 00:11:16.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.808 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:16.808 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:16.808 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:11:16.808 00:11:16.808 --- 10.0.0.4 ping statistics --- 00:11:16.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.808 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:16.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:16.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:11:16.808 00:11:16.808 --- 10.0.0.1 ping statistics --- 00:11:16.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.808 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:16.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:16.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:11:16.808 00:11:16.808 --- 10.0.0.2 ping statistics --- 00:11:16.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.808 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:16.808 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.809 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:16.809 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:17.068 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:11:17.068 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:17.068 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:17.068 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:17.068 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:17.068 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:17.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
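Note: same topology, different target shape. Where the queue-depth run used core mask 0x2 (one reactor), nvmfappstart -m 0xF below starts the multipath target across four reactors, as the "Total cores available: 4" and four "Reactor started" notices that follow confirm. The mask is simply a bitmap of CPU cores:

# -m 0xF = binary 1111 -> reactors on cores 0-3 (-m 0x2 = binary 0010 -> core 1 only)
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &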
00:11:17.068 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=67194 00:11:17.068 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:17.068 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 67194 00:11:17.068 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 67194 ']' 00:11:17.068 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.068 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.068 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.068 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.068 23:55:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:17.068 [2024-11-18 23:55:23.635236] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:11:17.068 [2024-11-18 23:55:23.635729] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.327 [2024-11-18 23:55:23.826401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.327 [2024-11-18 23:55:23.958829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.327 [2024-11-18 23:55:23.959202] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.327 [2024-11-18 23:55:23.959385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.327 [2024-11-18 23:55:23.959685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.327 [2024-11-18 23:55:23.959846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
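Note: the multipath setup proper follows below: the same subsystem gets listeners on both target addresses, and the initiator connects once per path using the host NQN/ID generated earlier with nvme gen-hostnqn. A sketch of the two-path setup, assuming the second connect (cut off at the end of this excerpt) targets the 10.0.0.4 listener; -g/-G enable TCP header and data digests:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420

hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a
hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a
# One connect per path; with native NVMe multipath enabled, the kernel
# merges both paths under a single namespace on the host
nvme connect --hostnqn=$hostnqn --hostid=$hostid -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
nvme connect --hostnqn=$hostnqn --hostid=$hostid -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G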
00:11:17.327 [2024-11-18 23:55:23.962255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.327 [2024-11-18 23:55:23.962378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.327 [2024-11-18 23:55:23.962512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.327 [2024-11-18 23:55:23.963249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.586 [2024-11-18 23:55:24.145875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:18.155 23:55:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.155 23:55:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:11:18.155 23:55:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:18.155 23:55:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:18.155 23:55:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:18.155 23:55:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.155 23:55:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:18.414 [2024-11-18 23:55:24.897699] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.414 23:55:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:18.673 Malloc0 00:11:18.673 23:55:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:18.931 23:55:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:19.190 23:55:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:19.450 [2024-11-18 23:55:26.006804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:19.450 23:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:11:19.709 [2024-11-18 23:55:26.255129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:11:19.709 23:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:19.967 23:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.4 -s 4420 -g -G 00:11:19.967 23:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:19.967 23:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:11:19.967 23:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:19.967 23:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:19.967 23:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:11:22.497 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:22.497 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:22.497 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:22.497 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:22.497 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67287 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:22.498 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:22.498 [global] 00:11:22.498 thread=1 00:11:22.498 invalidate=1 00:11:22.498 rw=randrw 00:11:22.498 time_based=1 00:11:22.498 runtime=6 00:11:22.498 ioengine=libaio 00:11:22.498 direct=1 00:11:22.498 bs=4096 00:11:22.498 iodepth=128 00:11:22.498 norandommap=0 00:11:22.498 numjobs=1 00:11:22.498 00:11:22.498 verify_dump=1 00:11:22.498 verify_backlog=512 00:11:22.498 verify_state_save=0 00:11:22.498 do_verify=1 00:11:22.498 verify=crc32c-intel 00:11:22.498 [job0] 00:11:22.498 filename=/dev/nvme0n1 00:11:22.498 Could not set queue depth (nvme0n1) 00:11:22.498 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.498 fio-3.35 00:11:22.498 Starting 1 thread 00:11:23.065 23:55:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:23.323 23:55:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:23.581 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:23.581 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:23.581 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
00:11:23.581 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:23.581 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:23.581 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:23.581 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:23.581 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:23.581 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:23.581 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:23.581 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:23.581 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:23.581 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:23.839 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:24.097 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:24.097 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:24.097 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:24.097 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:24.097 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:24.097 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:24.097 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:24.097 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:24.097 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:24.097 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:24.097 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:24.097 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:24.097 23:55:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67287 00:11:28.286 00:11:28.286 job0: (groupid=0, jobs=1): err= 0: pid=67308: Mon Nov 18 23:55:34 2024 00:11:28.286 read: IOPS=8252, BW=32.2MiB/s (33.8MB/s)(193MiB/6002msec) 00:11:28.286 slat (usec): min=7, max=7592, avg=73.38, stdev=289.01 00:11:28.286 clat (usec): min=1641, max=20261, avg=10617.22, stdev=1824.54 00:11:28.286 lat (usec): min=2238, max=20296, avg=10690.60, stdev=1828.39 00:11:28.286 clat percentiles (usec): 00:11:28.286 | 1.00th=[ 5407], 5.00th=[ 8094], 10.00th=[ 9110], 20.00th=[ 9765], 00:11:28.286 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:11:28.286 | 70.00th=[10945], 80.00th=[11338], 90.00th=[12125], 95.00th=[14877], 00:11:28.286 | 99.00th=[16581], 99.50th=[16909], 99.90th=[17433], 99.95th=[17957], 00:11:28.286 | 99.99th=[18482] 00:11:28.286 bw ( KiB/s): min= 6056, max=21752, per=54.80%, avg=18088.73, stdev=4231.36, samples=11 00:11:28.286 iops : min= 1514, max= 5438, avg=4522.18, stdev=1057.84, samples=11 00:11:28.286 write: IOPS=4843, BW=18.9MiB/s (19.8MB/s)(99.0MiB/5231msec); 0 zone resets 00:11:28.286 slat (usec): min=16, max=2137, avg=81.62, stdev=212.95 00:11:28.286 clat (usec): min=2782, max=17793, avg=9297.34, stdev=1627.81 00:11:28.286 lat (usec): min=2866, max=17822, avg=9378.96, stdev=1634.15 00:11:28.286 clat percentiles (usec): 00:11:28.286 | 1.00th=[ 4146], 5.00th=[ 5407], 10.00th=[ 7635], 20.00th=[ 8717], 00:11:28.286 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9765], 00:11:28.286 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[11076], 00:11:28.286 | 99.00th=[14091], 99.50th=[15139], 99.90th=[16319], 99.95th=[16909], 00:11:28.286 | 99.99th=[17433] 00:11:28.286 bw ( KiB/s): min= 6072, max=21224, per=93.19%, avg=18053.09, stdev=4118.03, samples=11 00:11:28.286 iops : min= 1518, max= 5306, avg=4513.27, stdev=1029.51, samples=11 00:11:28.286 lat (msec) : 2=0.01%, 4=0.34%, 10=42.81%, 20=56.85%, 50=0.01% 00:11:28.286 cpu : usr=5.10%, sys=18.78%, ctx=4339, majf=0, minf=102 00:11:28.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:28.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:28.286 issued rwts: total=49531,25335,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:28.286 00:11:28.286 Run status group 0 (all jobs): 00:11:28.286 READ: bw=32.2MiB/s (33.8MB/s), 32.2MiB/s-32.2MiB/s (33.8MB/s-33.8MB/s), io=193MiB (203MB), run=6002-6002msec 00:11:28.286 WRITE: bw=18.9MiB/s (19.8MB/s), 18.9MiB/s-18.9MiB/s (19.8MB/s-19.8MB/s), io=99.0MiB (104MB), run=5231-5231msec 00:11:28.286 00:11:28.286 Disk stats (read/write): 00:11:28.286 nvme0n1: ios=48353/25335, merge=0/0, ticks=495342/222320, in_queue=717662, util=98.75% 00:11:28.286 23:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:28.853 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:11:28.853 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:28.853 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:28.853 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:28.853 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:28.853 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:28.853 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:28.853 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:28.853 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:28.853 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:28.853 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:28.853 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:28.853 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:28.853 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:28.853 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67389 00:11:28.853 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:28.853 23:55:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:29.111 [global] 00:11:29.111 thread=1 00:11:29.111 invalidate=1 00:11:29.111 rw=randrw 00:11:29.111 time_based=1 00:11:29.111 runtime=6 00:11:29.111 ioengine=libaio 00:11:29.111 direct=1 00:11:29.111 bs=4096 00:11:29.111 iodepth=128 00:11:29.111 norandommap=0 00:11:29.111 numjobs=1 00:11:29.111 00:11:29.111 verify_dump=1 00:11:29.111 verify_backlog=512 00:11:29.111 verify_state_save=0 00:11:29.111 do_verify=1 00:11:29.111 verify=crc32c-intel 00:11:29.111 [job0] 00:11:29.111 filename=/dev/nvme0n1 00:11:29.111 Could not set queue depth (nvme0n1) 00:11:29.111 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:29.111 fio-3.35 00:11:29.111 Starting 1 thread 00:11:30.047 23:55:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:30.305 23:55:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:30.564 
23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:30.564 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:30.564 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:30.564 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:30.564 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:30.564 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:30.564 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:30.564 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:30.564 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:30.564 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:30.564 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:30.564 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:30.564 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:30.823 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:31.094 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:31.094 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:31.094 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:31.094 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:31.094 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:31.094 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:31.094 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:31.094 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:31.094 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:31.094 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:31.094 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:31.094 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:31.094 23:55:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67389 00:11:35.342 00:11:35.342 job0: (groupid=0, jobs=1): err= 0: pid=67414: Mon Nov 18 23:55:41 2024 00:11:35.342 read: IOPS=9385, BW=36.7MiB/s (38.4MB/s)(220MiB/6009msec) 00:11:35.342 slat (usec): min=7, max=7482, avg=55.91, stdev=245.50 00:11:35.342 clat (usec): min=462, max=18929, avg=9560.88, stdev=2424.69 00:11:35.342 lat (usec): min=472, max=18942, avg=9616.78, stdev=2443.81 00:11:35.342 clat percentiles (usec): 00:11:35.342 | 1.00th=[ 3556], 5.00th=[ 5145], 10.00th=[ 6063], 20.00th=[ 7439], 00:11:35.342 | 30.00th=[ 8848], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10290], 00:11:35.342 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11731], 95.00th=[13435], 00:11:35.342 | 99.00th=[16057], 99.50th=[16581], 99.90th=[17433], 99.95th=[17957], 00:11:35.342 | 99.99th=[18744] 00:11:35.342 bw ( KiB/s): min= 3816, max=32944, per=50.47%, avg=18947.33, stdev=8574.50, samples=12 00:11:35.342 iops : min= 954, max= 8236, avg=4736.83, stdev=2143.63, samples=12 00:11:35.342 write: IOPS=5683, BW=22.2MiB/s (23.3MB/s)(112MiB/5035msec); 0 zone resets 00:11:35.342 slat (usec): min=15, max=2038, avg=62.66, stdev=174.11 00:11:35.342 clat (usec): min=1006, max=18582, avg=7822.59, stdev=2459.91 00:11:35.342 lat (usec): min=1034, max=18605, avg=7885.25, stdev=2482.69 00:11:35.342 clat percentiles (usec): 00:11:35.342 | 1.00th=[ 2900], 5.00th=[ 3916], 10.00th=[ 4490], 20.00th=[ 5211], 00:11:35.342 | 30.00th=[ 5932], 40.00th=[ 7111], 50.00th=[ 8717], 60.00th=[ 9241], 00:11:35.342 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10421], 95.00th=[10814], 00:11:35.342 | 99.00th=[13304], 99.50th=[14353], 99.90th=[16057], 99.95th=[16909], 00:11:35.342 | 99.99th=[17433] 00:11:35.342 bw ( KiB/s): min= 4096, max=32976, per=83.74%, avg=19036.00, stdev=8479.79, samples=12 00:11:35.342 iops : min= 1024, max= 8244, avg=4759.00, stdev=2119.95, samples=12 00:11:35.342 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:35.342 lat (msec) : 2=0.16%, 4=2.76%, 10=55.12%, 20=41.93% 00:11:35.342 cpu : usr=5.58%, sys=19.69%, ctx=4797, majf=0, minf=108 00:11:35.342 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:35.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:35.342 issued rwts: total=56397,28615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:35.342 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:11:35.342 00:11:35.342 Run status group 0 (all jobs): 00:11:35.342 READ: bw=36.7MiB/s (38.4MB/s), 36.7MiB/s-36.7MiB/s (38.4MB/s-38.4MB/s), io=220MiB (231MB), run=6009-6009msec 00:11:35.342 WRITE: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=112MiB (117MB), run=5035-5035msec 00:11:35.342 00:11:35.342 Disk stats (read/write): 00:11:35.342 nvme0n1: ios=55665/28154, merge=0/0, ticks=511448/207132, in_queue=718580, util=98.67% 00:11:35.342 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:35.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:35.342 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:35.342 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:11:35.342 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:35.342 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.342 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:35.342 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.342 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:11:35.342 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.601 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:35.601 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:35.601 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:35.601 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:35.601 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:35.601 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:35.601 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:35.601 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:35.601 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:35.601 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:35.601 rmmod nvme_tcp 00:11:35.601 rmmod nvme_fabrics 00:11:35.860 rmmod nvme_keyring 00:11:35.860 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:35.860 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:35.860 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:35.860 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
67194 ']' 00:11:35.860 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 67194 00:11:35.860 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 67194 ']' 00:11:35.860 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 67194 00:11:35.860 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:11:35.860 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.860 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67194 00:11:35.860 killing process with pid 67194 00:11:35.860 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:35.860 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:35.860 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67194' 00:11:35.860 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 67194 00:11:35.860 23:55:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 67194 00:11:36.796 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:36.796 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:36.796 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:36.796 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:36.796 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:36.796 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:36.796 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:36.796 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:36.796 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:36.796 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:36.796 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:36.796 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:36.796 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:37.055 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:37.055 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:37.055 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:37.055 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:37.055 23:55:43 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:37.055 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:37.055 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:37.055 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:37.055 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:37.055 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:37.055 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.055 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.055 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.055 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:11:37.055 ************************************ 00:11:37.055 END TEST nvmf_target_multipath 00:11:37.055 ************************************ 00:11:37.055 00:11:37.055 real 0m20.873s 00:11:37.055 user 1m16.235s 00:11:37.055 sys 0m9.288s 00:11:37.055 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.055 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:37.315 ************************************ 00:11:37.315 START TEST nvmf_zcopy 00:11:37.315 ************************************ 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:37.315 * Looking for test storage... 
00:11:37.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:37.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.315 --rc genhtml_branch_coverage=1 00:11:37.315 --rc genhtml_function_coverage=1 00:11:37.315 --rc genhtml_legend=1 00:11:37.315 --rc geninfo_all_blocks=1 00:11:37.315 --rc geninfo_unexecuted_blocks=1 00:11:37.315 00:11:37.315 ' 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:37.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.315 --rc genhtml_branch_coverage=1 00:11:37.315 --rc genhtml_function_coverage=1 00:11:37.315 --rc genhtml_legend=1 00:11:37.315 --rc geninfo_all_blocks=1 00:11:37.315 --rc geninfo_unexecuted_blocks=1 00:11:37.315 00:11:37.315 ' 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:37.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.315 --rc genhtml_branch_coverage=1 00:11:37.315 --rc genhtml_function_coverage=1 00:11:37.315 --rc genhtml_legend=1 00:11:37.315 --rc geninfo_all_blocks=1 00:11:37.315 --rc geninfo_unexecuted_blocks=1 00:11:37.315 00:11:37.315 ' 00:11:37.315 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:37.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.315 --rc genhtml_branch_coverage=1 00:11:37.316 --rc genhtml_function_coverage=1 00:11:37.316 --rc genhtml_legend=1 00:11:37.316 --rc geninfo_all_blocks=1 00:11:37.316 --rc geninfo_unexecuted_blocks=1 00:11:37.316 00:11:37.316 ' 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
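The lt/cmp_versions trace above is the usual lcov-version guard: both version strings are split on '.', '-' and ':' into arrays and compared component-wise, so 1.15 sorts below 2. A self-contained sketch of that comparison (lt_ver is a hypothetical name, and components are assumed numeric, as in the 1.15-vs-2 case traced here):

    lt_ver() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly older
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                        # equal is not less-than
    }
    lt_ver 1.15 2 && echo "lcov 1.15 is older than 2"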
00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:37.316 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
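nvmftestinit arms the nvmftestfini trap on SIGINT/SIGTERM/EXIT before any network plumbing is created, which is why the multipath run above could tear its veth pairs and bridge down even on an abnormal exit. The idiom, sketched with an assumption-level stand-in for the real nvmftestfini (the actual helper also handles iptables rules and shared memory):

    cleanup() {
        kill "$nvmfpid" 2> /dev/null                   # stop the target app
        ip netns delete nvmf_tgt_ns_spdk 2> /dev/null  # drop the target namespace
        ip link delete nvmf_br type bridge 2> /dev/null
    }
    trap cleanup SIGINT SIGTERM EXIT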
00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:37.316 Cannot find device "nvmf_init_br" 00:11:37.316 23:55:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:37.316 23:55:43 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:37.575 Cannot find device "nvmf_init_br2" 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:37.575 Cannot find device "nvmf_tgt_br" 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:37.575 Cannot find device "nvmf_tgt_br2" 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:37.575 Cannot find device "nvmf_init_br" 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:37.575 Cannot find device "nvmf_init_br2" 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:37.575 Cannot find device "nvmf_tgt_br" 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:37.575 Cannot find device "nvmf_tgt_br2" 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:37.575 Cannot find device "nvmf_br" 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:37.575 Cannot find device "nvmf_init_if" 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:37.575 Cannot find device "nvmf_init_if2" 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:37.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:37.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:37.575 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:37.576 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:37.576 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:37.576 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:37.576 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:37.834 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:37.835 23:55:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:11:37.835 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:11:37.835 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms
00:11:37.835
00:11:37.835 --- 10.0.0.3 ping statistics ---
00:11:37.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:37.835 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:11:37.835 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:11:37.835 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms
00:11:37.835
00:11:37.835 --- 10.0.0.4 ping statistics ---
00:11:37.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:37.835 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:11:37.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:37.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:11:37.835
00:11:37.835 --- 10.0.0.1 ping statistics ---
00:11:37.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:37.835 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:11:37.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:37.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms
00:11:37.835
00:11:37.835 --- 10.0.0.2 ping statistics ---
00:11:37.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:37.835 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:37.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=67720
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 67720
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 67720 ']'
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:37.835 23:55:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:38.094 [2024-11-18 23:55:44.534754] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
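For reference, the setup traced above (nvmf/common.sh@177 through @219) builds a fixed four-pair veth topology: the target ends (nvmf_tgt_if, nvmf_tgt_if2) are moved into the nvmf_tgt_ns_spdk namespace and addressed 10.0.0.3/10.0.0.4, the initiator ends (nvmf_init_if, nvmf_init_if2) stay in the root namespace as 10.0.0.1/10.0.0.2, and all four bridge-side peers are enslaved to nvmf_br; the iptables rules carry an SPDK_NVMF comment so cleanup can find them later. A condensed sketch using only commands that appear in the trace (the for-loop is editorial shorthand, not the helper's actual code):

    # Four veth pairs, one bridge, one namespace -- the zcopy test rig.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator path 1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator path 2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target path 1
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target path 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live in the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br  master nvmf_br                      # bridge everything together
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

The four pings above then confirm reachability in both directions across the bridge: root namespace to the target addresses (10.0.0.3, 10.0.0.4) and the namespace back to the initiator addresses (10.0.0.1, 10.0.0.2).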
00:11:38.094 [2024-11-18 23:55:44.534963] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.094 [2024-11-18 23:55:44.720530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.352 [2024-11-18 23:55:44.806278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.352 [2024-11-18 23:55:44.806345] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.352 [2024-11-18 23:55:44.806379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.352 [2024-11-18 23:55:44.806401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.352 [2024-11-18 23:55:44.806413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:38.352 [2024-11-18 23:55:44.807622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.352 [2024-11-18 23:55:44.958634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:38.919 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:38.919 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:38.919 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:38.919 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:38.919 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:38.919 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.919 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:38.919 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:38.919 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.919 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:38.919 [2024-11-18 23:55:45.575871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.919 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.919 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:38.919 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.919 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:38.919 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.920 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:38.920 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.920 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:11:38.920 [2024-11-18 23:55:45.596150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:38.920 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.920 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:38.920 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.920 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:39.178 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.178 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:39.178 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.178 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:39.178 malloc0 00:11:39.178 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.178 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:39.178 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.178 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:39.178 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.178 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:39.178 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:39.178 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:39.178 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:39.178 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:39.178 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:39.178 { 00:11:39.178 "params": { 00:11:39.178 "name": "Nvme$subsystem", 00:11:39.178 "trtype": "$TEST_TRANSPORT", 00:11:39.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:39.178 "adrfam": "ipv4", 00:11:39.178 "trsvcid": "$NVMF_PORT", 00:11:39.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:39.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:39.178 "hdgst": ${hdgst:-false}, 00:11:39.178 "ddgst": ${ddgst:-false} 00:11:39.178 }, 00:11:39.178 "method": "bdev_nvme_attach_controller" 00:11:39.178 } 00:11:39.178 EOF 00:11:39.178 )") 00:11:39.178 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:39.178 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
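The bring-up just traced runs through rpc_cmd, the autotest wrapper that forwards its arguments to the target's JSON-RPC socket (/var/tmp/spdk.sock). Restated as plain scripts/rpc.py calls it is roughly the sketch below; the flags are exactly those in the log, while the rpc.py spelling and the SPDK_DIR shorthand (here standing in for /home/vagrant/spdk_repo/spdk) are assumptions, since the log only shows the wrapper:

    # Start the target inside the namespace (pid 67720 in this run)...
    ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    # ...then configure it over JSON-RPC once the socket is up:
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -c 0 --zcopy   # TCP transport, zero-copy enabled
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    rpc bdev_malloc_create 32 4096 -b malloc0          # RAM-backed bdev, 4 KiB blocks
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

This matches the trace: the transport comes up with zcopy on, cnode1 listens on 10.0.0.3:4420, and malloc0 is exported as namespace 1 -- the NSID the later loop keeps colliding with.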
00:11:39.178 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:11:39.178 23:55:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:11:39.178 "params": {
00:11:39.178 "name": "Nvme1",
00:11:39.178 "trtype": "tcp",
00:11:39.178 "traddr": "10.0.0.3",
00:11:39.178 "adrfam": "ipv4",
00:11:39.178 "trsvcid": "4420",
00:11:39.178 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:11:39.178 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:11:39.178 "hdgst": false,
00:11:39.178 "ddgst": false
00:11:39.178 },
00:11:39.178 "method": "bdev_nvme_attach_controller"
00:11:39.178 }'
00:11:39.178 [2024-11-18 23:55:45.770440] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:11:39.178 [2024-11-18 23:55:45.770631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67762 ]
00:11:39.436 [2024-11-18 23:55:45.958683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:39.436 [2024-11-18 23:55:46.084351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:39.694 [2024-11-18 23:55:46.255942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:11:39.952 Running I/O for 10 seconds...
00:11:41.823 5203.00 IOPS, 40.65 MiB/s
[2024-11-18T23:55:49.449Z] 5161.50 IOPS, 40.32 MiB/s
[2024-11-18T23:55:50.852Z] 5152.00 IOPS, 40.25 MiB/s
[2024-11-18T23:55:51.789Z] 5156.25 IOPS, 40.28 MiB/s
[2024-11-18T23:55:52.726Z] 5181.00 IOPS, 40.48 MiB/s
[2024-11-18T23:55:53.663Z] 5189.67 IOPS, 40.54 MiB/s
[2024-11-18T23:55:54.599Z] 5202.71 IOPS, 40.65 MiB/s
[2024-11-18T23:55:55.548Z] 5216.12 IOPS, 40.75 MiB/s
[2024-11-18T23:55:56.495Z] 5221.89 IOPS, 40.80 MiB/s
[2024-11-18T23:55:56.495Z] 5222.00 IOPS, 40.80 MiB/s
00:11:49.803 Latency(us)
[2024-11-18T23:55:56.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:49.803 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:11:49.803 Verification LBA range: start 0x0 length 0x1000
00:11:49.803 Nvme1n1 : 10.02 5224.50 40.82 0.00 0.00 24433.76 3157.64 34078.72
00:11:49.803 [2024-11-18T23:55:56.495Z] ===================================================================================================================
00:11:49.803 [2024-11-18T23:55:56.495Z] Total : 5224.50 40.82 0.00 0.00 24433.76 3157.64 34078.72
00:11:50.740 23:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67887
00:11:50.740 23:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:11:50.740 23:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:11:50.740 23:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:50.740 23:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:11:50.740 23:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:11:50.740 23:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:11:50.740 23:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:11:50.740 23:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:50.740 { 00:11:50.740 "params": { 00:11:50.740 "name": "Nvme$subsystem", 00:11:50.740 "trtype": "$TEST_TRANSPORT", 00:11:50.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:50.740 "adrfam": "ipv4", 00:11:50.740 "trsvcid": "$NVMF_PORT", 00:11:50.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:50.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:50.740 "hdgst": ${hdgst:-false}, 00:11:50.740 "ddgst": ${ddgst:-false} 00:11:50.740 }, 00:11:50.740 "method": "bdev_nvme_attach_controller" 00:11:50.740 } 00:11:50.740 EOF 00:11:50.740 )") 00:11:50.741 23:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:50.741 [2024-11-18 23:55:57.322307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.741 [2024-11-18 23:55:57.322365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.741 23:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:11:50.741 23:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:50.741 23:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:50.741 "params": { 00:11:50.741 "name": "Nvme1", 00:11:50.741 "trtype": "tcp", 00:11:50.741 "traddr": "10.0.0.3", 00:11:50.741 "adrfam": "ipv4", 00:11:50.741 "trsvcid": "4420", 00:11:50.741 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:50.741 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:50.741 "hdgst": false, 00:11:50.741 "ddgst": false 00:11:50.741 }, 00:11:50.741 "method": "bdev_nvme_attach_controller" 00:11:50.741 }' 00:11:50.741 [2024-11-18 23:55:57.334210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.741 [2024-11-18 23:55:57.334304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.741 [2024-11-18 23:55:57.346208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.741 [2024-11-18 23:55:57.346250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.741 [2024-11-18 23:55:57.358187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.741 [2024-11-18 23:55:57.358248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.741 [2024-11-18 23:55:57.370246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.741 [2024-11-18 23:55:57.370287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.741 [2024-11-18 23:55:57.382225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.741 [2024-11-18 23:55:57.382458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.741 [2024-11-18 23:55:57.394216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.741 [2024-11-18 23:55:57.394255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.741 [2024-11-18 23:55:57.406237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.741 [2024-11-18 23:55:57.406279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.741 [2024-11-18 23:55:57.407906] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
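For the verify run above, gen_nvmf_target_json expands its heredoc template into the bdev_nvme_attach_controller entry printed in the trace and hands it to bdevperf as --json /dev/fd/62 (a process-substitution fd, hence the odd path; the second run uses /dev/fd/63 the same way). A standalone equivalent is sketched below: the params block is verbatim from the log, while the surrounding subsystems/bdev envelope is inferred from SPDK's usual JSON-config shape, and the file name is illustrative (the test never writes a file):

    cat > /tmp/bdevperf-nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Verify workload for 10 s at queue depth 128, 8 KiB I/O -- as in the first run:
    "$SPDK_DIR/build/examples/bdevperf" --json /tmp/bdevperf-nvme.json -t 10 -q 128 -w verify -o 8192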
00:11:50.741 [2024-11-18 23:55:57.408080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67887 ] 00:11:50.741 [2024-11-18 23:55:57.418260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.741 [2024-11-18 23:55:57.418298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.430238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.430452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.442260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.442306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.454256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.454311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.466277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.466314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.478283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.478486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.490298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.490493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.502303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.502506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.514311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.514506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.526295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.526495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.538311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.538506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.550309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.550517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.562355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.562538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.574338] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.574558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.579792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.001 [2024-11-18 23:55:57.586332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.586517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.598413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.598723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.610359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.610543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.622352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.622559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.634373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.634558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.646368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.646583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.658381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.658590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.670380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.670586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.001 [2024-11-18 23:55:57.673702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.001 [2024-11-18 23:55:57.682388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.001 [2024-11-18 23:55:57.682557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.261 [2024-11-18 23:55:57.694449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.261 [2024-11-18 23:55:57.694542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.261 [2024-11-18 23:55:57.706481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.261 [2024-11-18 23:55:57.706686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.261 [2024-11-18 23:55:57.718412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.261 [2024-11-18 23:55:57.718473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.261 [2024-11-18 23:55:57.730472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.261 [2024-11-18 23:55:57.730514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:11:51.261 [2024-11-18 23:55:57.742433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.261 [2024-11-18 23:55:57.742495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.261 [2024-11-18 23:55:57.754485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.261 [2024-11-18 23:55:57.754811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.261 [2024-11-18 23:55:57.766481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.261 [2024-11-18 23:55:57.766553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.261 [2024-11-18 23:55:57.778488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.261 [2024-11-18 23:55:57.778530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.261 [2024-11-18 23:55:57.790463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.262 [2024-11-18 23:55:57.790507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.262 [2024-11-18 23:55:57.802445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.262 [2024-11-18 23:55:57.802480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.262 [2024-11-18 23:55:57.814434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.262 [2024-11-18 23:55:57.814661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.262 [2024-11-18 23:55:57.826456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.262 [2024-11-18 23:55:57.826491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.262 [2024-11-18 23:55:57.838437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.262 [2024-11-18 23:55:57.838643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.262 [2024-11-18 23:55:57.850487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.262 [2024-11-18 23:55:57.850523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.262 [2024-11-18 23:55:57.862463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.262 [2024-11-18 23:55:57.862502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.262 [2024-11-18 23:55:57.863267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:51.262 [2024-11-18 23:55:57.874542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.262 [2024-11-18 23:55:57.874594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.262 [2024-11-18 23:55:57.886517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.262 [2024-11-18 23:55:57.886576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.262 [2024-11-18 23:55:57.898470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.262 [2024-11-18 23:55:57.898508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:11:51.262 [2024-11-18 23:55:57.910459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.262 [2024-11-18 23:55:57.910518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.262 [2024-11-18 23:55:57.922481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.262 [2024-11-18 23:55:57.922718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.262 [2024-11-18 23:55:57.934508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.262 [2024-11-18 23:55:57.934714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.262 [2024-11-18 23:55:57.946531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.262 [2024-11-18 23:55:57.946728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.521 [2024-11-18 23:55:57.958540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.521 [2024-11-18 23:55:57.958777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.521 [2024-11-18 23:55:57.970533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.522 [2024-11-18 23:55:57.970781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.522 [2024-11-18 23:55:57.982520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.522 [2024-11-18 23:55:57.982744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.522 [2024-11-18 23:55:57.994579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.522 [2024-11-18 23:55:57.994843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.522 [2024-11-18 23:55:58.006567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.522 [2024-11-18 23:55:58.006809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.522 [2024-11-18 23:55:58.018571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.522 [2024-11-18 23:55:58.018825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.522 [2024-11-18 23:55:58.030643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.522 [2024-11-18 23:55:58.030886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.522 [2024-11-18 23:55:58.042666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.522 [2024-11-18 23:55:58.042850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.522 Running I/O for 5 seconds... 
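From here to the end of the section the log is dominated by pairs of expected errors: each nvmf_subsystem_add_ns attempt first pauses the subsystem (hence the nvmf_rpc_ns_paused callback in the second line of every pair), then fails in spdk_nvmf_subsystem_add_ns_ext because NSID 1 is already occupied by malloc0, and the subsystem resumes. The point is to hammer the pause/resume path while the 5-second randrw bdevperf run (perfpid 67887) keeps I/O in flight. A minimal sketch of such a loop, assuming this is what target/zcopy.sh does (the loop itself is not visible in the trace):

    # Re-add the same NSID in a tight loop while bdevperf (started above with
    # -t 5 -q 128 -w randrw -M 50 -o 8192) is still running. Every attempt is
    # expected to fail; what matters is the implied
    # pause -> add_ns (fails) -> resume cycle under active zcopy I/O.
    while kill -0 "$perfpid" 2> /dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done

The roughly 12 ms spacing of the timestamps below is consistent with one RPC round-trip per iteration.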
00:11:51.522 [2024-11-18 23:55:58.061541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.522 [2024-11-18 23:55:58.061785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.522 [2024-11-18 23:55:58.077089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.522 [2024-11-18 23:55:58.077295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.522 [2024-11-18 23:55:58.088020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.522 [2024-11-18 23:55:58.088221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.522 [2024-11-18 23:55:58.103840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.522 [2024-11-18 23:55:58.104052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.522 [2024-11-18 23:55:58.119010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.522 [2024-11-18 23:55:58.119223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.522 [2024-11-18 23:55:58.134200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.522 [2024-11-18 23:55:58.134409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.522 [2024-11-18 23:55:58.150649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.522 [2024-11-18 23:55:58.150726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.522 [2024-11-18 23:55:58.167590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.522 [2024-11-18 23:55:58.167656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.522 [2024-11-18 23:55:58.183898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.522 [2024-11-18 23:55:58.183984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.522 [2024-11-18 23:55:58.201261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.522 [2024-11-18 23:55:58.201469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.784 [2024-11-18 23:55:58.216748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.784 [2024-11-18 23:55:58.216812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.784 [2024-11-18 23:55:58.233018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.784 [2024-11-18 23:55:58.233076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.784 [2024-11-18 23:55:58.249751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.784 [2024-11-18 23:55:58.249812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.784 [2024-11-18 23:55:58.266348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.784 [2024-11-18 23:55:58.266390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.784 [2024-11-18 23:55:58.281223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.784 
[2024-11-18 23:55:58.281287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.784 [2024-11-18 23:55:58.297659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.784 [2024-11-18 23:55:58.297698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.784 [2024-11-18 23:55:58.312420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.784 [2024-11-18 23:55:58.312482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.784 [2024-11-18 23:55:58.328326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.784 [2024-11-18 23:55:58.328367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.784 [2024-11-18 23:55:58.344859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.784 [2024-11-18 23:55:58.345109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.784 [2024-11-18 23:55:58.361854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.784 [2024-11-18 23:55:58.361897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.784 [2024-11-18 23:55:58.379091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.784 [2024-11-18 23:55:58.379152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.784 [2024-11-18 23:55:58.393899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.784 [2024-11-18 23:55:58.393940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.784 [2024-11-18 23:55:58.410372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.784 [2024-11-18 23:55:58.410418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.784 [2024-11-18 23:55:58.426063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.784 [2024-11-18 23:55:58.426104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.784 [2024-11-18 23:55:58.437034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.784 [2024-11-18 23:55:58.437095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.784 [2024-11-18 23:55:58.452816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.784 [2024-11-18 23:55:58.452857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:51.784 [2024-11-18 23:55:58.467582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:51.784 [2024-11-18 23:55:58.467689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.042 [2024-11-18 23:55:58.483613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.042 [2024-11-18 23:55:58.483681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.042 [2024-11-18 23:55:58.494813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.042 [2024-11-18 23:55:58.494874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.042 [2024-11-18 23:55:58.510857] 
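As a side note, the 'Cannot find device' and 'Cannot open network namespace' messages that open this section are not failures: they come from the nvmf/common.sh cleanup (lines @160 through @174 in the trace) being run defensively before setup, when none of the devices exist yet. The EXIT trap installed after bring-up (trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT) runs the same unwind again at the end; a sketch from the commands visible at the top of the section, with the error-tolerant '|| true' added editorially:

    # Unwind the veth/bridge rig; every step may fail harmlessly, which is
    # exactly what produces the "Cannot find device" lines on a clean host.
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" nomaster || true
        ip link set "$br" down || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true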
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.042 [2024-11-18 23:55:58.510899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.042 [2024-11-18 23:55:58.525736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.042 [2024-11-18 23:55:58.525798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.042 [2024-11-18 23:55:58.542735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.042 [2024-11-18 23:55:58.542775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.042 [2024-11-18 23:55:58.559064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.042 [2024-11-18 23:55:58.559125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.042 [2024-11-18 23:55:58.576177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.042 [2024-11-18 23:55:58.576219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.042 [2024-11-18 23:55:58.591788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.042 [2024-11-18 23:55:58.591835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.042 [2024-11-18 23:55:58.602320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.043 [2024-11-18 23:55:58.602361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.043 [2024-11-18 23:55:58.619065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.043 [2024-11-18 23:55:58.619127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.043 [2024-11-18 23:55:58.633950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.043 [2024-11-18 23:55:58.633993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.043 [2024-11-18 23:55:58.649332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.043 [2024-11-18 23:55:58.649553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.043 [2024-11-18 23:55:58.661114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.043 [2024-11-18 23:55:58.661155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.043 [2024-11-18 23:55:58.676863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.043 [2024-11-18 23:55:58.676923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.043 [2024-11-18 23:55:58.692718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.043 [2024-11-18 23:55:58.692758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.043 [2024-11-18 23:55:58.709283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.043 [2024-11-18 23:55:58.709345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.043 [2024-11-18 23:55:58.727308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.043 [2024-11-18 23:55:58.727368] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.301 [2024-11-18 23:55:58.744077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.301 [2024-11-18 23:55:58.744125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.301 [2024-11-18 23:55:58.756596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.301 [2024-11-18 23:55:58.756663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.301 [2024-11-18 23:55:58.768577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.301 [2024-11-18 23:55:58.768684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.301 [2024-11-18 23:55:58.783795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.301 [2024-11-18 23:55:58.783836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.301 [2024-11-18 23:55:58.799732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.301 [2024-11-18 23:55:58.799794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.301 [2024-11-18 23:55:58.818144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.301 [2024-11-18 23:55:58.818188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.301 [2024-11-18 23:55:58.831569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.301 [2024-11-18 23:55:58.831666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.301 [2024-11-18 23:55:58.849338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.301 [2024-11-18 23:55:58.849554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.301 [2024-11-18 23:55:58.865962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.301 [2024-11-18 23:55:58.866023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.301 [2024-11-18 23:55:58.882880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.301 [2024-11-18 23:55:58.882923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.301 [2024-11-18 23:55:58.900059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.301 [2024-11-18 23:55:58.900272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.301 [2024-11-18 23:55:58.913161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.301 [2024-11-18 23:55:58.913218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.301 [2024-11-18 23:55:58.931636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.301 [2024-11-18 23:55:58.931710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.301 [2024-11-18 23:55:58.946334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.301 [2024-11-18 23:55:58.946389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.301 [2024-11-18 23:55:58.963587] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.301 [2024-11-18 23:55:58.963652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.301 [2024-11-18 23:55:58.980891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.301 [2024-11-18 23:55:58.980947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.560 [2024-11-18 23:55:58.995757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.560 [2024-11-18 23:55:58.995802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.560 [2024-11-18 23:55:59.010594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.560 [2024-11-18 23:55:59.010664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.560 [2024-11-18 23:55:59.027595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.560 [2024-11-18 23:55:59.027684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.560 [2024-11-18 23:55:59.042073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.560 [2024-11-18 23:55:59.042117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.560 9832.00 IOPS, 76.81 MiB/s [2024-11-18T23:55:59.252Z] [2024-11-18 23:55:59.058108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.560 [2024-11-18 23:55:59.058169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.560 [2024-11-18 23:55:59.073867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.560 [2024-11-18 23:55:59.073910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.560 [2024-11-18 23:55:59.084428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.560 [2024-11-18 23:55:59.084505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.560 [2024-11-18 23:55:59.100706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.560 [2024-11-18 23:55:59.100764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.560 [2024-11-18 23:55:59.115580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.560 [2024-11-18 23:55:59.115651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.560 [2024-11-18 23:55:59.131773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.560 [2024-11-18 23:55:59.131830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.560 [2024-11-18 23:55:59.147180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.560 [2024-11-18 23:55:59.147242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.560 [2024-11-18 23:55:59.163753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.560 [2024-11-18 23:55:59.163795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.560 [2024-11-18 23:55:59.180314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:52.560 [2024-11-18 23:55:59.180391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.560 [2024-11-18 23:55:59.197127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.560 [2024-11-18 23:55:59.197201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.560 [2024-11-18 23:55:59.213861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.560 [2024-11-18 23:55:59.213940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.560 [2024-11-18 23:55:59.229872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.560 [2024-11-18 23:55:59.229914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.560 [2024-11-18 23:55:59.245913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.560 [2024-11-18 23:55:59.245962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.819 [2024-11-18 23:55:59.261301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.819 [2024-11-18 23:55:59.261359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.819 [2024-11-18 23:55:59.277755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.819 [2024-11-18 23:55:59.277801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.819 [2024-11-18 23:55:59.293737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.819 [2024-11-18 23:55:59.293794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.819 [2024-11-18 23:55:59.304356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.819 [2024-11-18 23:55:59.304417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.819 [2024-11-18 23:55:59.319451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.819 [2024-11-18 23:55:59.319508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.819 [2024-11-18 23:55:59.336710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.819 [2024-11-18 23:55:59.336755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.819 [2024-11-18 23:55:59.352147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.819 [2024-11-18 23:55:59.352189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.819 [2024-11-18 23:55:59.362896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.819 [2024-11-18 23:55:59.362941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.819 [2024-11-18 23:55:59.379357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.819 [2024-11-18 23:55:59.379415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.819 [2024-11-18 23:55:59.394529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.819 [2024-11-18 23:55:59.394606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.819 [2024-11-18 23:55:59.410865] 
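A quick arithmetic check on the throughput samples interleaved with the error pairs (9832.00 IOPS above, 9964.00 IOPS below): bdevperf was started with -o 8192, so each I/O moves 8 KiB and MiB/s should equal IOPS x 8192 / 2^20:

    $ echo '9832 * 8192 / 1048576' | bc -l
    76.81250000000000000000    # log reports 76.81 MiB/s
    $ echo '9964 * 8192 / 1048576' | bc -l
    77.84375000000000000000    # log reports 77.84 MiB/s

The randrw pass runs at roughly twice the IOPS of the earlier verify pass; one plausible reading is that -M 50 makes half the operations reads while the verify workload additionally re-reads and checks every block it writes.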
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.819 [2024-11-18 23:55:59.410923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.819 [2024-11-18 23:55:59.428102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.819 [2024-11-18 23:55:59.428163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.819 [2024-11-18 23:55:59.442531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.819 [2024-11-18 23:55:59.442589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.819 [2024-11-18 23:55:59.457524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.819 [2024-11-18 23:55:59.457603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.819 [2024-11-18 23:55:59.473586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.819 [2024-11-18 23:55:59.473675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.819 [2024-11-18 23:55:59.491007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.819 [2024-11-18 23:55:59.491068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.819 [2024-11-18 23:55:59.506221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:52.819 [2024-11-18 23:55:59.506280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.078 [2024-11-18 23:55:59.521951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.078 [2024-11-18 23:55:59.522040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.078 [2024-11-18 23:55:59.538850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.078 [2024-11-18 23:55:59.538906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.078 [2024-11-18 23:55:59.554712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.078 [2024-11-18 23:55:59.554755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.078 [2024-11-18 23:55:59.571619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.078 [2024-11-18 23:55:59.571700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.078 [2024-11-18 23:55:59.588021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.078 [2024-11-18 23:55:59.588064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.078 [2024-11-18 23:55:59.605283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.078 [2024-11-18 23:55:59.605344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.078 [2024-11-18 23:55:59.621221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.078 [2024-11-18 23:55:59.621278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.078 [2024-11-18 23:55:59.631581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.078 [2024-11-18 23:55:59.631648] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.078 [2024-11-18 23:55:59.648256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.078 [2024-11-18 23:55:59.648359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.078 [2024-11-18 23:55:59.664593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.078 [2024-11-18 23:55:59.664680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.078 [2024-11-18 23:55:59.681397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.078 [2024-11-18 23:55:59.681484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.078 [2024-11-18 23:55:59.695915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.079 [2024-11-18 23:55:59.695998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.079 [2024-11-18 23:55:59.711674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.079 [2024-11-18 23:55:59.711712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.079 [2024-11-18 23:55:59.722148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.079 [2024-11-18 23:55:59.722205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.079 [2024-11-18 23:55:59.737786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.079 [2024-11-18 23:55:59.737828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.079 [2024-11-18 23:55:59.752360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.079 [2024-11-18 23:55:59.752416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.338 [2024-11-18 23:55:59.768667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.338 [2024-11-18 23:55:59.768725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.338 [2024-11-18 23:55:59.784537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.338 [2024-11-18 23:55:59.784623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.338 [2024-11-18 23:55:59.800874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.338 [2024-11-18 23:55:59.800918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.338 [2024-11-18 23:55:59.818060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.338 [2024-11-18 23:55:59.818118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.338 [2024-11-18 23:55:59.834775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.338 [2024-11-18 23:55:59.834817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.338 [2024-11-18 23:55:59.851126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.338 [2024-11-18 23:55:59.851183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.338 [2024-11-18 23:55:59.868575] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.338 [2024-11-18 23:55:59.868644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.338 [2024-11-18 23:55:59.883865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.338 [2024-11-18 23:55:59.883908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.338 [2024-11-18 23:55:59.900369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.338 [2024-11-18 23:55:59.900426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.338 [2024-11-18 23:55:59.917232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.338 [2024-11-18 23:55:59.917288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.338 [2024-11-18 23:55:59.933160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.338 [2024-11-18 23:55:59.933219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.338 [2024-11-18 23:55:59.949285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.338 [2024-11-18 23:55:59.949344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.338 [2024-11-18 23:55:59.960992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.338 [2024-11-18 23:55:59.961049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.338 [2024-11-18 23:55:59.978264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.338 [2024-11-18 23:55:59.978322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.339 [2024-11-18 23:55:59.992799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.339 [2024-11-18 23:55:59.992860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.339 [2024-11-18 23:56:00.010027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.339 [2024-11-18 23:56:00.010089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.339 [2024-11-18 23:56:00.023157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.339 [2024-11-18 23:56:00.023231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.598 [2024-11-18 23:56:00.040747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.598 [2024-11-18 23:56:00.040804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.598 9964.00 IOPS, 77.84 MiB/s [2024-11-18T23:56:00.290Z] [2024-11-18 23:56:00.057116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.598 [2024-11-18 23:56:00.057172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.598 [2024-11-18 23:56:00.072772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.598 [2024-11-18 23:56:00.072828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.598 [2024-11-18 23:56:00.083451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:53.598 [2024-11-18 23:56:00.083507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.598 [2024-11-18 23:56:00.099809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.598 [2024-11-18 23:56:00.099864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.598 [2024-11-18 23:56:00.115093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.598 [2024-11-18 23:56:00.115152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.598 [2024-11-18 23:56:00.126270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.598 [2024-11-18 23:56:00.126328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.598 [2024-11-18 23:56:00.139816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.598 [2024-11-18 23:56:00.139873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.598 [2024-11-18 23:56:00.156163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.598 [2024-11-18 23:56:00.156205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.598 [2024-11-18 23:56:00.172865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.598 [2024-11-18 23:56:00.172924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.598 [2024-11-18 23:56:00.189910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.598 [2024-11-18 23:56:00.189968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.598 [2024-11-18 23:56:00.204443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.598 [2024-11-18 23:56:00.204500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.598 [2024-11-18 23:56:00.221542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.598 [2024-11-18 23:56:00.221626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.598 [2024-11-18 23:56:00.237930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.598 [2024-11-18 23:56:00.238020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.598 [2024-11-18 23:56:00.254387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.598 [2024-11-18 23:56:00.254445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.598 [2024-11-18 23:56:00.271393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.598 [2024-11-18 23:56:00.271450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.857 [2024-11-18 23:56:00.288676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.857 [2024-11-18 23:56:00.288729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.857 [2024-11-18 23:56:00.303658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.857 [2024-11-18 23:56:00.303698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.857 [2024-11-18 23:56:00.319986] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.857 [2024-11-18 23:56:00.320034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.857 [2024-11-18 23:56:00.336999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.857 [2024-11-18 23:56:00.337057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.857 [2024-11-18 23:56:00.353702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.857 [2024-11-18 23:56:00.353759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.857 [2024-11-18 23:56:00.370685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.857 [2024-11-18 23:56:00.370723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.857 [2024-11-18 23:56:00.386706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.857 [2024-11-18 23:56:00.386749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.857 [2024-11-18 23:56:00.403737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.857 [2024-11-18 23:56:00.403793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.857 [2024-11-18 23:56:00.419780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.857 [2024-11-18 23:56:00.419839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.857 [2024-11-18 23:56:00.436096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.857 [2024-11-18 23:56:00.436140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.857 [2024-11-18 23:56:00.453294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.857 [2024-11-18 23:56:00.453351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.857 [2024-11-18 23:56:00.468684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.857 [2024-11-18 23:56:00.468741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.858 [2024-11-18 23:56:00.485240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.858 [2024-11-18 23:56:00.485298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.858 [2024-11-18 23:56:00.501356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.858 [2024-11-18 23:56:00.501415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.858 [2024-11-18 23:56:00.511937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.858 [2024-11-18 23:56:00.512008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.858 [2024-11-18 23:56:00.528383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.858 [2024-11-18 23:56:00.528440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.858 [2024-11-18 23:56:00.544043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.858 [2024-11-18 23:56:00.544090] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.116 [2024-11-18 23:56:00.560720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.116 [2024-11-18 23:56:00.560776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.116 [2024-11-18 23:56:00.578248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.116 [2024-11-18 23:56:00.578306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.116 [2024-11-18 23:56:00.594826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.116 [2024-11-18 23:56:00.594869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.116 [2024-11-18 23:56:00.611670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.116 [2024-11-18 23:56:00.611709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.116 [2024-11-18 23:56:00.628724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.116 [2024-11-18 23:56:00.628764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.116 [2024-11-18 23:56:00.645241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.116 [2024-11-18 23:56:00.645301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.116 [2024-11-18 23:56:00.662462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.116 [2024-11-18 23:56:00.662518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.116 [2024-11-18 23:56:00.678825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.116 [2024-11-18 23:56:00.678867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.116 [2024-11-18 23:56:00.695327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.116 [2024-11-18 23:56:00.695386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.116 [2024-11-18 23:56:00.712167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.116 [2024-11-18 23:56:00.712210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.116 [2024-11-18 23:56:00.729466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.116 [2024-11-18 23:56:00.729539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.116 [2024-11-18 23:56:00.746033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.116 [2024-11-18 23:56:00.746091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.116 [2024-11-18 23:56:00.762201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.116 [2024-11-18 23:56:00.762258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.117 [2024-11-18 23:56:00.772808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.117 [2024-11-18 23:56:00.772850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.117 [2024-11-18 23:56:00.789108] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.117 [2024-11-18 23:56:00.789168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.376 [2024-11-18 23:56:00.806764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.376 [2024-11-18 23:56:00.806814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.376 [2024-11-18 23:56:00.822148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.376 [2024-11-18 23:56:00.822206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.376 [2024-11-18 23:56:00.838525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.376 [2024-11-18 23:56:00.838583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.377 [2024-11-18 23:56:00.855658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.377 [2024-11-18 23:56:00.855745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.377 [2024-11-18 23:56:00.872041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.377 [2024-11-18 23:56:00.872100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.377 [2024-11-18 23:56:00.888336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.377 [2024-11-18 23:56:00.888394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.377 [2024-11-18 23:56:00.905034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.377 [2024-11-18 23:56:00.905095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.377 [2024-11-18 23:56:00.918103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.377 [2024-11-18 23:56:00.918161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.377 [2024-11-18 23:56:00.936157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.377 [2024-11-18 23:56:00.936203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.377 [2024-11-18 23:56:00.951915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.377 [2024-11-18 23:56:00.951988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.377 [2024-11-18 23:56:00.963674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.377 [2024-11-18 23:56:00.963716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.377 [2024-11-18 23:56:00.979274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.377 [2024-11-18 23:56:00.979332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.377 [2024-11-18 23:56:00.994350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.377 [2024-11-18 23:56:00.994408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.377 [2024-11-18 23:56:01.007337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.377 [2024-11-18 23:56:01.007399] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.377 [2024-11-18 23:56:01.027304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.377 [2024-11-18 23:56:01.027364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.377 [2024-11-18 23:56:01.041340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.377 [2024-11-18 23:56:01.041399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.377 9989.33 IOPS, 78.04 MiB/s [2024-11-18T23:56:01.069Z] [2024-11-18 23:56:01.058691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.377 [2024-11-18 23:56:01.058766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.635 [2024-11-18 23:56:01.074824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.635 [2024-11-18 23:56:01.074881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.635 [2024-11-18 23:56:01.089957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.635 [2024-11-18 23:56:01.090030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.635 [2024-11-18 23:56:01.106515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.635 [2024-11-18 23:56:01.106574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.635 [2024-11-18 23:56:01.123593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.635 [2024-11-18 23:56:01.123683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.635 [2024-11-18 23:56:01.136377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.636 [2024-11-18 23:56:01.136433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.636 [2024-11-18 23:56:01.153927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.636 [2024-11-18 23:56:01.154143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.636 [2024-11-18 23:56:01.169588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.636 [2024-11-18 23:56:01.169813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.636 [2024-11-18 23:56:01.180920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.636 [2024-11-18 23:56:01.181107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.636 [2024-11-18 23:56:01.195110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.636 [2024-11-18 23:56:01.195149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.636 [2024-11-18 23:56:01.210762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.636 [2024-11-18 23:56:01.210953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.636 [2024-11-18 23:56:01.226263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.636 [2024-11-18 23:56:01.226454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.636 [2024-11-18 
23:56:01.242081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.636 [2024-11-18 23:56:01.242274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.636 [2024-11-18 23:56:01.258122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.636 [2024-11-18 23:56:01.258313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.636 [2024-11-18 23:56:01.275904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.636 [2024-11-18 23:56:01.276135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.636 [2024-11-18 23:56:01.291789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.636 [2024-11-18 23:56:01.291997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.636 [2024-11-18 23:56:01.303393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.636 [2024-11-18 23:56:01.303628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.636 [2024-11-18 23:56:01.320916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.636 [2024-11-18 23:56:01.321165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.895 [2024-11-18 23:56:01.337984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.895 [2024-11-18 23:56:01.338228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.895 [2024-11-18 23:56:01.354361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.895 [2024-11-18 23:56:01.354404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.895 [2024-11-18 23:56:01.369569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.895 [2024-11-18 23:56:01.369801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.895 [2024-11-18 23:56:01.386629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.895 [2024-11-18 23:56:01.386701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.895 [2024-11-18 23:56:01.401698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.895 [2024-11-18 23:56:01.401741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.895 [2024-11-18 23:56:01.417369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.895 [2024-11-18 23:56:01.417599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.895 [2024-11-18 23:56:01.429030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.895 [2024-11-18 23:56:01.429072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.895 [2024-11-18 23:56:01.444804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.895 [2024-11-18 23:56:01.444844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.895 [2024-11-18 23:56:01.461169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.895 [2024-11-18 23:56:01.461211] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.895 [2024-11-18 23:56:01.478235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.895 [2024-11-18 23:56:01.478277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.895 [2024-11-18 23:56:01.494810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.895 [2024-11-18 23:56:01.494852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.895 [2024-11-18 23:56:01.511253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.895 [2024-11-18 23:56:01.511295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.895 [2024-11-18 23:56:01.528763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.895 [2024-11-18 23:56:01.528816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.895 [2024-11-18 23:56:01.545350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.895 [2024-11-18 23:56:01.545392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.895 [2024-11-18 23:56:01.561916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.895 [2024-11-18 23:56:01.561959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.895 [2024-11-18 23:56:01.579682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.895 [2024-11-18 23:56:01.579758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.155 [2024-11-18 23:56:01.595366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.155 [2024-11-18 23:56:01.595579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.155 [2024-11-18 23:56:01.611009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.155 [2024-11-18 23:56:01.611050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.155 [2024-11-18 23:56:01.627395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.155 [2024-11-18 23:56:01.627436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.155 [2024-11-18 23:56:01.643836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.155 [2024-11-18 23:56:01.643877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.155 [2024-11-18 23:56:01.661604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.155 [2024-11-18 23:56:01.661675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.155 [2024-11-18 23:56:01.677798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.155 [2024-11-18 23:56:01.677855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.155 [2024-11-18 23:56:01.694493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.155 [2024-11-18 23:56:01.694537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.155 [2024-11-18 23:56:01.710425] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.155 [2024-11-18 23:56:01.710467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.155 [2024-11-18 23:56:01.721464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.155 [2024-11-18 23:56:01.721668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.155 [2024-11-18 23:56:01.737371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.155 [2024-11-18 23:56:01.737567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.155 [2024-11-18 23:56:01.752635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.155 [2024-11-18 23:56:01.752873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.155 [2024-11-18 23:56:01.767722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.155 [2024-11-18 23:56:01.767764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.155 [2024-11-18 23:56:01.784638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.155 [2024-11-18 23:56:01.784709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.155 [2024-11-18 23:56:01.800315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.155 [2024-11-18 23:56:01.800358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.155 [2024-11-18 23:56:01.811557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.155 [2024-11-18 23:56:01.811805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.155 [2024-11-18 23:56:01.829140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.155 [2024-11-18 23:56:01.829184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.415 [2024-11-18 23:56:01.845741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.415 [2024-11-18 23:56:01.845820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.415 [2024-11-18 23:56:01.862108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.415 [2024-11-18 23:56:01.862150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.415 [2024-11-18 23:56:01.878536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.415 [2024-11-18 23:56:01.878580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.415 [2024-11-18 23:56:01.894670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.415 [2024-11-18 23:56:01.894712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.415 [2024-11-18 23:56:01.905326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.415 [2024-11-18 23:56:01.905367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.415 [2024-11-18 23:56:01.921714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.415 [2024-11-18 23:56:01.921756] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.415 [2024-11-18 23:56:01.937467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.415 [2024-11-18 23:56:01.937702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.415 [2024-11-18 23:56:01.954273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.415 [2024-11-18 23:56:01.954316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.415 [2024-11-18 23:56:01.970696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.415 [2024-11-18 23:56:01.970738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.415 [2024-11-18 23:56:01.988665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.415 [2024-11-18 23:56:01.988738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.415 [2024-11-18 23:56:02.005290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.415 [2024-11-18 23:56:02.005333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.415 [2024-11-18 23:56:02.021704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.415 [2024-11-18 23:56:02.021747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.415 [2024-11-18 23:56:02.034062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.415 [2024-11-18 23:56:02.034103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.415 9965.50 IOPS, 77.86 MiB/s [2024-11-18T23:56:02.107Z] [2024-11-18 23:56:02.052233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.415 [2024-11-18 23:56:02.052489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.415 [2024-11-18 23:56:02.067776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.415 [2024-11-18 23:56:02.067818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.415 [2024-11-18 23:56:02.083300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.415 [2024-11-18 23:56:02.083510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.415 [2024-11-18 23:56:02.094119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.415 [2024-11-18 23:56:02.094313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.674 [2024-11-18 23:56:02.110947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.674 [2024-11-18 23:56:02.110990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.674 [2024-11-18 23:56:02.126262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.674 [2024-11-18 23:56:02.126304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.674 [2024-11-18 23:56:02.144261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.674 [2024-11-18 23:56:02.144320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.674 [2024-11-18 
23:56:02.160524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.674 [2024-11-18 23:56:02.160570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.674 [2024-11-18 23:56:02.171825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.674 [2024-11-18 23:56:02.171868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.674 [2024-11-18 23:56:02.187785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.674 [2024-11-18 23:56:02.187828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.674 [2024-11-18 23:56:02.203252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.674 [2024-11-18 23:56:02.203295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.674 [2024-11-18 23:56:02.214553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.674 [2024-11-18 23:56:02.214642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.674 [2024-11-18 23:56:02.231420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.674 [2024-11-18 23:56:02.231466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.674 [2024-11-18 23:56:02.248729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.674 [2024-11-18 23:56:02.248771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.674 [2024-11-18 23:56:02.264917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.674 [2024-11-18 23:56:02.265142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.674 [2024-11-18 23:56:02.280426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.674 [2024-11-18 23:56:02.280646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.674 [2024-11-18 23:56:02.297465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.674 [2024-11-18 23:56:02.297690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.674 [2024-11-18 23:56:02.314200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.674 [2024-11-18 23:56:02.314393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.674 [2024-11-18 23:56:02.331097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.674 [2024-11-18 23:56:02.331290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.674 [2024-11-18 23:56:02.348533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.674 [2024-11-18 23:56:02.348746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.933 [2024-11-18 23:56:02.365690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.933 [2024-11-18 23:56:02.365935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.934 [2024-11-18 23:56:02.381377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.934 [2024-11-18 23:56:02.381572] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.934 [2024-11-18 23:56:02.392998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.934 [2024-11-18 23:56:02.393190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.934 [2024-11-18 23:56:02.409567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.934 [2024-11-18 23:56:02.409794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.934 [2024-11-18 23:56:02.424715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.934 [2024-11-18 23:56:02.424899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.934 [2024-11-18 23:56:02.441547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.934 [2024-11-18 23:56:02.441771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.934 [2024-11-18 23:56:02.459157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.934 [2024-11-18 23:56:02.459348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.934 [2024-11-18 23:56:02.473964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.934 [2024-11-18 23:56:02.474174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.934 [2024-11-18 23:56:02.490815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.934 [2024-11-18 23:56:02.491040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.934 [2024-11-18 23:56:02.505571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.934 [2024-11-18 23:56:02.505797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.934 [2024-11-18 23:56:02.522051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.934 [2024-11-18 23:56:02.522275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.934 [2024-11-18 23:56:02.539491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.934 [2024-11-18 23:56:02.539716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.934 [2024-11-18 23:56:02.556213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.934 [2024-11-18 23:56:02.556451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.934 [2024-11-18 23:56:02.572168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.934 [2024-11-18 23:56:02.572410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.934 [2024-11-18 23:56:02.589364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.934 [2024-11-18 23:56:02.589573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.934 [2024-11-18 23:56:02.605944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:55.934 [2024-11-18 23:56:02.606153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.193 [2024-11-18 23:56:02.623076] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.193 [2024-11-18 23:56:02.623269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.193 [2024-11-18 23:56:02.639852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.193 [2024-11-18 23:56:02.640065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.193 [2024-11-18 23:56:02.657396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.193 [2024-11-18 23:56:02.657589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.193 [2024-11-18 23:56:02.673132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.193 [2024-11-18 23:56:02.673326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.193 [2024-11-18 23:56:02.689879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.193 [2024-11-18 23:56:02.690056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.193 [2024-11-18 23:56:02.707325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.193 [2024-11-18 23:56:02.707517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.193 [2024-11-18 23:56:02.723467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.193 [2024-11-18 23:56:02.723725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.193 [2024-11-18 23:56:02.740797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.193 [2024-11-18 23:56:02.741005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.193 [2024-11-18 23:56:02.757170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.193 [2024-11-18 23:56:02.757361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.193 [2024-11-18 23:56:02.767717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.193 [2024-11-18 23:56:02.767879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.193 [2024-11-18 23:56:02.783928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.193 [2024-11-18 23:56:02.784153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.193 [2024-11-18 23:56:02.799555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.193 [2024-11-18 23:56:02.799794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.193 [2024-11-18 23:56:02.816534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.193 [2024-11-18 23:56:02.816757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.193 [2024-11-18 23:56:02.834168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.193 [2024-11-18 23:56:02.834379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:56.193 [2024-11-18 23:56:02.849846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:56.193 [2024-11-18 23:56:02.850098] 
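The repeating subsystem.c:2123 / nvmf_rpc.c:1517 pair above is the duplicate-NSID failure path being exercised deliberately: each add-namespace RPC requests NSID 1 while an existing namespace still owns that NSID, so the subsystem rejects the add and the RPC handler (which performs the add while the subsystem is paused, hence the nvmf_rpc_ns_paused frame) reports the failure. A minimal sketch of provoking the same two messages by hand against a running SPDK target follows; the RPC socket path, NQN, and bdev names are illustrative assumptions, not values taken from this run:

  # Sketch only: provoke "Requested NSID 1 already in use" on a live nvmf target.
  rpc="scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed socket path
  $rpc bdev_malloc_create -b Malloc0 64 512    # 64 MiB bdev, 512-byte blocks
  $rpc bdev_malloc_create -b Malloc1 64 512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # NSID 1 now taken
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1   # rejected: NSID 1 in use
  # The second add makes subsystem.c log "Requested NSID 1 already in use" and
  # the RPC layer log "Unable to add namespace", matching the pair seen here.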
[... error pair repeats from 23:56:02.866582 through 23:56:03.043638; identical entries elided ...]
00:11:56.452 9975.60 IOPS, 77.93 MiB/s [2024-11-18T23:56:03.144Z]
00:11:56.452 [2024-11-18 23:56:03.059178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:56.452 [2024-11-18 23:56:03.059220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:56.452                                                                        Latency(us)
00:11:56.452 Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min      max
00:11:56.452 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:56.452 Nvme1n1            : 5.01        9974.37   77.92  0.00    0.00  12811.62  4140.68  22997.18
00:11:56.452 ===================================================================================================================
00:11:56.452 Total              :             9974.37   77.92  0.00    0.00  12811.62  4140.68  22997.18
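The block above is the end-of-run device summary printed by the I/O generator (it matches SPDK bdevperf's layout): runtime in seconds, IOPS, throughput in MiB/s, failed and timed-out I/O per second, then average/min/max I/O latency in microseconds. Read together with the interim lines, the run held roughly 9.9-10k IOPS (about 78 MiB/s of 8 KiB random mixed I/O at queue depth 128) across the full 5 s window while the add-namespace RPCs were failing. A small sketch of pulling the Total IOPS back out of a captured console log; the file name and the 9000 IOPS floor are illustrative assumptions:

  # Sketch only: extract the IOPS column of the "Total" row from a saved log.
  total_iops=$(awk '/Total[[:space:]]*:/ {
      for (i = 1; i <= NF; i++)          # IOPS is the first field after ":"
          if ($i == ":") { print $(i + 1); exit }
  }' build.log)                          # build.log is an assumed capture file
  echo "Total IOPS: $total_iops"
  [ "${total_iops%.*}" -ge 9000 ] || echo "IOPS below assumed floor"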
00:11:56.452 [2024-11-18 23:56:03.070063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:56.453 [2024-11-18 23:56:03.070104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... error pair resumes at ~12 ms intervals from 23:56:03.082096 through 23:56:03.730372; identical entries elided ...]
00:11:57.231 [2024-11-18 23:56:03.742321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:57.231 [2024-11-18 23:56:03.742358]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.231 [2024-11-18 23:56:03.754350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.231 [2024-11-18 23:56:03.754558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.231 [2024-11-18 23:56:03.766353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.231 [2024-11-18 23:56:03.766391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.231 [2024-11-18 23:56:03.778339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.232 [2024-11-18 23:56:03.778377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.232 [2024-11-18 23:56:03.790345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:57.232 [2024-11-18 23:56:03.790543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.232 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67887) - No such process 00:11:57.232 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67887 00:11:57.232 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.232 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.232 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:57.232 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.232 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:57.232 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.232 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:57.232 delay0 00:11:57.232 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.232 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:57.232 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.232 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:57.232 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.232 23:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:11:57.491 [2024-11-18 23:56:04.059238] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:04.054 Initializing NVMe Controllers 00:12:04.054 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:12:04.054 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:04.054 Initialization complete. Launching workers. 
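Before the abort counters below, a note on the error burst trimmed above: it is the deliberate negative half of this test, with zcopy.sh repeatedly asking the target to attach a namespace under NSID 1 while that NSID is still claimed. A minimal hand reproduction against a running target, assuming SPDK's stock scripts/rpc.py client (the NQN and bdev names are the ones visible in this log; the sequencing is my reconstruction, not harness output):

    # the first call claims NSID 1; the second is refused with the same pair of errors
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 \
        || echo 'rejected: Requested NSID 1 already in use'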
00:12:04.054 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 121 00:12:04.054 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 408, failed to submit 33 00:12:04.054 success 293, unsuccessful 115, failed 0 00:12:04.054 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:04.054 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:04.054 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:04.054 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:12:04.054 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:04.054 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:12:04.054 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:04.054 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:04.054 rmmod nvme_tcp 00:12:04.054 rmmod nvme_fabrics 00:12:04.054 rmmod nvme_keyring 00:12:04.054 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:04.055 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:12:04.055 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:12:04.055 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 67720 ']' 00:12:04.055 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 67720 00:12:04.055 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 67720 ']' 00:12:04.055 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 67720 00:12:04.055 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:12:04.055 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.055 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67720 00:12:04.055 killing process with pid 67720 00:12:04.055 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:04.055 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:04.055 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67720' 00:12:04.055 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 67720 00:12:04.055 23:56:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 67720 00:12:04.622 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:04.622 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:04.622 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:04.622 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:12:04.622 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:04.622 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:12:04.622 23:56:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:12:04.622 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:04.622 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:04.622 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:04.622 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:04.622 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:04.622 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:04.622 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:04.622 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:04.622 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:04.622 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:04.622 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:04.887 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:04.887 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:04.887 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:04.887 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:04.887 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:04.887 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.887 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.887 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.887 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:12:04.887 00:12:04.887 real 0m27.665s 00:12:04.887 user 0m45.342s 00:12:04.887 sys 0m6.889s 00:12:04.887 ************************************ 00:12:04.887 END TEST nvmf_zcopy 00:12:04.887 ************************************ 00:12:04.887 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.887 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:04.887 23:56:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:04.887 23:56:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:04.887 23:56:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.887 23:56:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:04.887 ************************************ 00:12:04.887 START TEST nvmf_nmic 00:12:04.887 ************************************ 00:12:04.887 23:56:11 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:04.887 * Looking for test storage... 00:12:04.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:04.887 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:05.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.179 --rc genhtml_branch_coverage=1 00:12:05.179 --rc genhtml_function_coverage=1 00:12:05.179 --rc genhtml_legend=1 00:12:05.179 --rc geninfo_all_blocks=1 00:12:05.179 --rc geninfo_unexecuted_blocks=1 00:12:05.179 00:12:05.179 ' 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:05.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.179 --rc genhtml_branch_coverage=1 00:12:05.179 --rc genhtml_function_coverage=1 00:12:05.179 --rc genhtml_legend=1 00:12:05.179 --rc geninfo_all_blocks=1 00:12:05.179 --rc geninfo_unexecuted_blocks=1 00:12:05.179 00:12:05.179 ' 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:05.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.179 --rc genhtml_branch_coverage=1 00:12:05.179 --rc genhtml_function_coverage=1 00:12:05.179 --rc genhtml_legend=1 00:12:05.179 --rc geninfo_all_blocks=1 00:12:05.179 --rc geninfo_unexecuted_blocks=1 00:12:05.179 00:12:05.179 ' 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:05.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.179 --rc genhtml_branch_coverage=1 00:12:05.179 --rc genhtml_function_coverage=1 00:12:05.179 --rc genhtml_legend=1 00:12:05.179 --rc geninfo_all_blocks=1 00:12:05.179 --rc geninfo_unexecuted_blocks=1 00:12:05.179 00:12:05.179 ' 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.179 23:56:11 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.179 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.180 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:05.180 23:56:11 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:05.180 Cannot 
find device "nvmf_init_br" 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:05.180 Cannot find device "nvmf_init_br2" 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:05.180 Cannot find device "nvmf_tgt_br" 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:05.180 Cannot find device "nvmf_tgt_br2" 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:05.180 Cannot find device "nvmf_init_br" 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:05.180 Cannot find device "nvmf_init_br2" 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:05.180 Cannot find device "nvmf_tgt_br" 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:05.180 Cannot find device "nvmf_tgt_br2" 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:05.180 Cannot find device "nvmf_br" 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:05.180 Cannot find device "nvmf_init_if" 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:05.180 Cannot find device "nvmf_init_if2" 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:05.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:05.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:12:05.180 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:05.450 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
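The ip commands that begin above and continue on the lines below assemble the virtual test network: one veth pair per leg, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, and the peer ends enslaved to a bridge so the initiator addresses (10.0.0.1/2) can reach the target addresses (10.0.0.3/4). A condensed single-leg sketch using the same names and addressing as this run, with error handling omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator leg
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target leg
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                             # bridge joins the two legs
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ping -c 1 10.0.0.3                                          # initiator-to-target sanity check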
00:12:05.450 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:05.450 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:05.450 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:05.450 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:05.450 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:05.450 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:05.450 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:05.450 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:05.450 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:05.450 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:05.450 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:05.450 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:05.450 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:05.450 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:05.450 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:05.450 23:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:05.450 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:05.450 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:12:05.450 00:12:05.450 --- 10.0.0.3 ping statistics --- 00:12:05.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.450 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:05.450 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:05.450 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:12:05.450 00:12:05.450 --- 10.0.0.4 ping statistics --- 00:12:05.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.450 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:05.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:05.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:05.450 00:12:05.450 --- 10.0.0.1 ping statistics --- 00:12:05.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.450 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:05.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:05.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:12:05.450 00:12:05.450 --- 10.0.0.2 ping statistics --- 00:12:05.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.450 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.450 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:05.451 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:05.709 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:05.709 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:05.709 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:05.709 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:05.709 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=68283 00:12:05.709 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.709 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 68283 00:12:05.709 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 68283 ']' 00:12:05.709 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.709 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.709 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.709 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.709 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:05.709 [2024-11-18 23:56:12.248075] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
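The target is then started inside that namespace, as the ip netns exec line above shows. A sketch of the launch-and-wait pattern, run from the SPDK repo root; the until loop is only a loose stand-in for the harness's waitforlisten helper, and my readings of the flags (-i shared-memory id, -e tracepoint group mask, -m reactor core mask) come from SPDK's usage text rather than from this log:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the RPC socket until the app answers before issuing any rpc.py commands
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done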
00:12:05.710 [2024-11-18 23:56:12.248206] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.969 [2024-11-18 23:56:12.433057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.969 [2024-11-18 23:56:12.562766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.969 [2024-11-18 23:56:12.562830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.969 [2024-11-18 23:56:12.562862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.969 [2024-11-18 23:56:12.562877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.969 [2024-11-18 23:56:12.562892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.969 [2024-11-18 23:56:12.565054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.969 [2024-11-18 23:56:12.565313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.969 [2024-11-18 23:56:12.565342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.969 [2024-11-18 23:56:12.566030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.229 [2024-11-18 23:56:12.755007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.798 [2024-11-18 23:56:13.296766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.798 Malloc0 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:06.798 23:56:13 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.798 [2024-11-18 23:56:13.419018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:06.798 test case1: single bdev can't be used in multiple subsystems 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.798 [2024-11-18 23:56:13.446726] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:06.798 [2024-11-18 23:56:13.446797] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:06.798 [2024-11-18 23:56:13.446835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.798 request: 00:12:06.798 { 00:12:06.798 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:06.798 "namespace": { 00:12:06.798 "bdev_name": "Malloc0", 00:12:06.798 "no_auto_visible": false 00:12:06.798 }, 00:12:06.798 "method": "nvmf_subsystem_add_ns", 00:12:06.798 "req_id": 1 00:12:06.798 } 00:12:06.798 Got JSON-RPC error response 00:12:06.798 response: 00:12:06.798 { 00:12:06.798 "code": -32602, 00:12:06.798 "message": "Invalid parameters" 00:12:06.798 } 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:06.798 Adding namespace failed - expected result. 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:06.798 test case2: host connect to nvmf target in multiple paths 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.798 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:06.798 [2024-11-18 23:56:13.458916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:12:06.799 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.799 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:07.058 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:12:07.058 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:07.058 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:12:07.058 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.058 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:07.058 23:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:12:09.593 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:09.593 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:09.593 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.593 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:09.593 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.593 23:56:15 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:12:09.593 23:56:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:09.593 [global] 00:12:09.593 thread=1 00:12:09.593 invalidate=1 00:12:09.593 rw=write 00:12:09.593 time_based=1 00:12:09.593 runtime=1 00:12:09.593 ioengine=libaio 00:12:09.593 direct=1 00:12:09.593 bs=4096 00:12:09.593 iodepth=1 00:12:09.593 norandommap=0 00:12:09.593 numjobs=1 00:12:09.593 00:12:09.593 verify_dump=1 00:12:09.593 verify_backlog=512 00:12:09.593 verify_state_save=0 00:12:09.593 do_verify=1 00:12:09.593 verify=crc32c-intel 00:12:09.593 [job0] 00:12:09.593 filename=/dev/nvme0n1 00:12:09.593 Could not set queue depth (nvme0n1) 00:12:09.593 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:09.593 fio-3.35 00:12:09.593 Starting 1 thread 00:12:10.530 00:12:10.530 job0: (groupid=0, jobs=1): err= 0: pid=68374: Mon Nov 18 23:56:17 2024 00:12:10.530 read: IOPS=2504, BW=9.78MiB/s (10.3MB/s)(9.79MiB/1001msec) 00:12:10.530 slat (nsec): min=11264, max=44314, avg=14369.04, stdev=4026.32 00:12:10.530 clat (usec): min=172, max=932, avg=220.98, stdev=33.07 00:12:10.530 lat (usec): min=188, max=948, avg=235.35, stdev=33.62 00:12:10.530 clat percentiles (usec): 00:12:10.530 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:12:10.530 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 225], 00:12:10.530 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 262], 00:12:10.530 | 99.00th=[ 302], 99.50th=[ 367], 99.90th=[ 586], 99.95th=[ 881], 00:12:10.530 | 99.99th=[ 930] 00:12:10.530 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:10.530 slat (nsec): min=16557, max=79721, avg=21685.70, stdev=6251.40 00:12:10.530 clat (usec): min=80, max=326, avg=135.14, stdev=17.22 00:12:10.530 lat (usec): min=127, max=406, avg=156.83, stdev=18.64 00:12:10.530 clat percentiles (usec): 00:12:10.530 | 1.00th=[ 111], 5.00th=[ 114], 10.00th=[ 118], 20.00th=[ 122], 00:12:10.530 | 30.00th=[ 125], 40.00th=[ 128], 50.00th=[ 133], 60.00th=[ 137], 00:12:10.530 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 159], 95.00th=[ 167], 00:12:10.530 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 206], 99.95th=[ 289], 00:12:10.530 | 99.99th=[ 326] 00:12:10.530 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:12:10.530 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:10.530 lat (usec) : 100=0.04%, 250=94.99%, 500=4.89%, 750=0.04%, 1000=0.04% 00:12:10.530 cpu : usr=2.40%, sys=6.80%, ctx=5072, majf=0, minf=5 00:12:10.530 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.530 issued rwts: total=2507,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.530 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.530 00:12:10.530 Run status group 0 (all jobs): 00:12:10.530 READ: bw=9.78MiB/s (10.3MB/s), 9.78MiB/s-9.78MiB/s (10.3MB/s-10.3MB/s), io=9.79MiB (10.3MB), run=1001-1001msec 00:12:10.530 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:12:10.530 00:12:10.530 Disk stats (read/write): 00:12:10.530 nvme0n1: ios=2112/2560, merge=0/0, 
ticks=509/388, in_queue=897, util=91.58% 00:12:10.530 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:10.531 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:10.531 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:12:10.531 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:10.531 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.531 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.531 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:10.531 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:12:10.531 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:10.531 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:10.531 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:10.531 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:10.531 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:10.531 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:10.531 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:10.531 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:10.531 rmmod nvme_tcp 00:12:10.790 rmmod nvme_fabrics 00:12:10.790 rmmod nvme_keyring 00:12:10.790 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:10.790 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:10.790 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:10.790 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 68283 ']' 00:12:10.790 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 68283 00:12:10.790 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 68283 ']' 00:12:10.790 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 68283 00:12:10.790 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:12:10.790 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.790 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68283 00:12:10.790 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.790 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.790 killing process with pid 68283 00:12:10.790 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68283' 00:12:10.790 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@973 -- # kill 68283 00:12:10.790 23:56:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 68283 00:12:11.728 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:11.728 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:11.728 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:11.728 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:11.728 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:12:11.728 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:11.728 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:12:11.728 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:11.728 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:11.728 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:11.728 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:11.728 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:11.728 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:11.987 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:11.987 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:11.987 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:11.987 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:11.987 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:11.987 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:11.987 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:11.987 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:11.987 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:11.987 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:11.987 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.987 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.987 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.987 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:12:11.987 00:12:11.987 real 0m7.159s 00:12:11.987 user 0m21.336s 00:12:11.987 sys 0m2.478s 00:12:11.987 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.987 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:11.987 
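The write/verify pass that fio-wrapper drove above is reproducible as a standalone fio invocation; a minimal sketch, assuming the same libaio/4 KiB/QD1 parameters shown in the [global]/[job0] dump and taking /dev/nvme0n1 as the namespace the connect step exposed:

    # sketch only: flags mirror the job file dumped above; the device path
    # is an assumption (whichever block device the NVMe/TCP connect created)
    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --thread --invalidate=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
        --time_based --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
        --verify_backlog=512 --verify_state_save=0

Substituting --rw=randwrite gives the -t randwrite variant that the fio.sh test runs later in this log.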
************************************ 00:12:11.987 END TEST nvmf_nmic 00:12:11.987 ************************************ 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:12.248 ************************************ 00:12:12.248 START TEST nvmf_fio_target 00:12:12.248 ************************************ 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:12.248 * Looking for test storage... 00:12:12.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:12.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.248 --rc genhtml_branch_coverage=1 00:12:12.248 --rc genhtml_function_coverage=1 00:12:12.248 --rc genhtml_legend=1 00:12:12.248 --rc geninfo_all_blocks=1 00:12:12.248 --rc geninfo_unexecuted_blocks=1 00:12:12.248 00:12:12.248 ' 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:12.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.248 --rc genhtml_branch_coverage=1 00:12:12.248 --rc genhtml_function_coverage=1 00:12:12.248 --rc genhtml_legend=1 00:12:12.248 --rc geninfo_all_blocks=1 00:12:12.248 --rc geninfo_unexecuted_blocks=1 00:12:12.248 00:12:12.248 ' 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:12.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.248 --rc genhtml_branch_coverage=1 00:12:12.248 --rc genhtml_function_coverage=1 00:12:12.248 --rc genhtml_legend=1 00:12:12.248 --rc geninfo_all_blocks=1 00:12:12.248 --rc geninfo_unexecuted_blocks=1 00:12:12.248 00:12:12.248 ' 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:12.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.248 --rc genhtml_branch_coverage=1 00:12:12.248 --rc genhtml_function_coverage=1 00:12:12.248 --rc genhtml_legend=1 00:12:12.248 --rc geninfo_all_blocks=1 00:12:12.248 --rc geninfo_unexecuted_blocks=1 00:12:12.248 00:12:12.248 ' 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:12.248 
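The xtrace block above is bash comparing versions by hand: cmp_versions splits "1.15" and "2" on IFS=.-: into arrays and walks them numerically, which is why the decimal 1 / decimal 2 steps echo their inputs back. A minimal standalone sketch of the same idiom (not the script's exact function):

    #!/usr/bin/env bash
    # lt VER1 VER2 -> exit 0 iff VER1 < VER2, comparing dot/dash/colon
    # separated components numerically; missing components count as 0
    lt() {
        local -a ver1 ver2
        local IFS=.-:
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=${#ver1[@]}
        (( ${#ver2[@]} > max )) && max=${#ver2[@]}
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "old lcov: keep --rc lcov_*_coverage spellings"

Here that lt 1.15 2 test is what selects the pre-2.0 lcov option names carried in the LCOV_OPTS/LCOV exports above.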
23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.248 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:12.249 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:12.249 23:56:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:12.249 Cannot find device "nvmf_init_br" 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:12:12.249 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:12.508 Cannot find device "nvmf_init_br2" 00:12:12.508 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:12:12.508 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:12.508 Cannot find device "nvmf_tgt_br" 00:12:12.508 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:12:12.508 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:12.508 Cannot find device "nvmf_tgt_br2" 00:12:12.508 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:12:12.508 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:12.508 Cannot find device "nvmf_init_br" 00:12:12.508 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:12:12.508 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:12.508 Cannot find device "nvmf_init_br2" 00:12:12.508 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:12:12.508 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:12.508 Cannot find device "nvmf_tgt_br" 00:12:12.508 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:12:12.508 23:56:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:12.508 Cannot find device "nvmf_tgt_br2" 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:12.508 Cannot find device "nvmf_br" 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:12.508 Cannot find device "nvmf_init_if" 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:12.508 Cannot find device "nvmf_init_if2" 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:12.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:12:12.508 
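Every "Cannot find device" line here is the expected branch of an idempotent pre-clean: each teardown command is allowed to fail (the trailing true steps in the trace), so a fresh runner and a dirty one converge on the same state before nvmf_veth_init rebuilds the topology in the commands that follow. A hedged sketch of the pattern, using the interface names from the trace and the NVMF_*_IP addressing declared earlier (the explicit error suppression is illustrative; the real run leaves stderr visible and appends a bare true):

    # pre-clean: ignore failures for links that may not exist yet
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" down 2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    # rebuild: veth pairs with the target ends moved into a namespace,
    # host-side peers enslaved to a bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

The ping pairs that follow (10.0.0.3/10.0.0.4 from the host, 10.0.0.1/10.0.0.2 from inside the namespace) are the smoke test that this wiring is up before the target starts.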
23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:12.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:12.508 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:12.509 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:12.509 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:12.509 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:12.767 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:12.767 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:12:12.767 00:12:12.767 --- 10.0.0.3 ping statistics --- 00:12:12.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.767 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:12.767 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:12.767 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:12:12.767 00:12:12.767 --- 10.0.0.4 ping statistics --- 00:12:12.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.767 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:12.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:12.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:12:12.767 00:12:12.767 --- 10.0.0.1 ping statistics --- 00:12:12.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.767 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:12.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:12.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:12:12.767 00:12:12.767 --- 10.0.0.2 ping statistics --- 00:12:12.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.767 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=68621 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 68621 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 68621 ']' 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.767 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.025 [2024-11-18 23:56:19.471448] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:12:13.025 [2024-11-18 23:56:19.472250] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.025 [2024-11-18 23:56:19.658887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:13.283 [2024-11-18 23:56:19.758744] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.283 [2024-11-18 23:56:19.758813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:13.283 [2024-11-18 23:56:19.758829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.283 [2024-11-18 23:56:19.758840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.283 [2024-11-18 23:56:19.758851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:13.283 [2024-11-18 23:56:19.760673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.283 [2024-11-18 23:56:19.760796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.283 [2024-11-18 23:56:19.760909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.284 [2024-11-18 23:56:19.760933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.284 [2024-11-18 23:56:19.930331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:13.850 23:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.850 23:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:12:13.850 23:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:13.850 23:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:13.850 23:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.850 23:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.850 23:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:14.109 [2024-11-18 23:56:20.646561] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.109 23:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:14.369 23:56:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:14.369 23:56:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:14.936 23:56:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:14.936 23:56:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:15.195 23:56:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:15.195 23:56:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:15.453 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:15.453 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:15.712 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:15.970 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:15.970 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:16.229 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:16.488 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:16.748 23:56:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:16.748 23:56:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:17.007 23:56:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:17.265 23:56:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:17.265 23:56:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:17.524 23:56:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:17.524 23:56:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:17.783 23:56:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:18.041 [2024-11-18 23:56:24.521513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:18.041 23:56:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:18.300 23:56:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:18.559 23:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:18.559 23:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:18.559 23:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:12:18.559 23:56:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.559 23:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:12:18.559 23:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:12:18.559 23:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:12:21.095 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:21.096 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:21.096 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:21.096 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:12:21.096 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.096 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:12:21.096 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:21.096 [global] 00:12:21.096 thread=1 00:12:21.096 invalidate=1 00:12:21.096 rw=write 00:12:21.096 time_based=1 00:12:21.096 runtime=1 00:12:21.096 ioengine=libaio 00:12:21.096 direct=1 00:12:21.096 bs=4096 00:12:21.096 iodepth=1 00:12:21.096 norandommap=0 00:12:21.096 numjobs=1 00:12:21.096 00:12:21.096 verify_dump=1 00:12:21.096 verify_backlog=512 00:12:21.096 verify_state_save=0 00:12:21.096 do_verify=1 00:12:21.096 verify=crc32c-intel 00:12:21.096 [job0] 00:12:21.096 filename=/dev/nvme0n1 00:12:21.096 [job1] 00:12:21.096 filename=/dev/nvme0n2 00:12:21.096 [job2] 00:12:21.096 filename=/dev/nvme0n3 00:12:21.096 [job3] 00:12:21.096 filename=/dev/nvme0n4 00:12:21.096 Could not set queue depth (nvme0n1) 00:12:21.096 Could not set queue depth (nvme0n2) 00:12:21.096 Could not set queue depth (nvme0n3) 00:12:21.096 Could not set queue depth (nvme0n4) 00:12:21.096 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:21.096 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:21.096 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:21.096 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:21.096 fio-3.35 00:12:21.096 Starting 4 threads 00:12:22.030 00:12:22.030 job0: (groupid=0, jobs=1): err= 0: pid=68811: Mon Nov 18 23:56:28 2024 00:12:22.030 read: IOPS=2517, BW=9.83MiB/s (10.3MB/s)(9.84MiB/1001msec) 00:12:22.030 slat (nsec): min=11331, max=97393, avg=14384.19, stdev=4391.61 00:12:22.030 clat (usec): min=163, max=336, avg=198.93, stdev=21.43 00:12:22.030 lat (usec): min=175, max=351, avg=213.32, stdev=22.28 00:12:22.030 clat percentiles (usec): 00:12:22.030 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 180], 00:12:22.030 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 202], 00:12:22.030 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 237], 00:12:22.030 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 314], 99.95th=[ 314], 00:12:22.030 | 99.99th=[ 338] 
00:12:22.030 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:22.030 slat (usec): min=14, max=105, avg=22.08, stdev= 6.03 00:12:22.030 clat (usec): min=114, max=2163, avg=155.09, stdev=44.48 00:12:22.030 lat (usec): min=133, max=2182, avg=177.17, stdev=45.03 00:12:22.030 clat percentiles (usec): 00:12:22.030 | 1.00th=[ 125], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:12:22.030 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 155], 00:12:22.030 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 178], 95.00th=[ 186], 00:12:22.030 | 99.00th=[ 208], 99.50th=[ 231], 99.90th=[ 375], 99.95th=[ 611], 00:12:22.030 | 99.99th=[ 2180] 00:12:22.030 bw ( KiB/s): min=12072, max=12072, per=36.45%, avg=12072.00, stdev= 0.00, samples=1 00:12:22.030 iops : min= 3018, max= 3018, avg=3018.00, stdev= 0.00, samples=1 00:12:22.030 lat (usec) : 250=98.90%, 500=1.06%, 750=0.02% 00:12:22.030 lat (msec) : 4=0.02% 00:12:22.030 cpu : usr=1.90%, sys=7.20%, ctx=5085, majf=0, minf=3 00:12:22.030 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:22.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.030 issued rwts: total=2520,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:22.030 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:22.030 job1: (groupid=0, jobs=1): err= 0: pid=68812: Mon Nov 18 23:56:28 2024 00:12:22.030 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:22.030 slat (nsec): min=8392, max=65112, avg=17302.85, stdev=6958.93 00:12:22.030 clat (usec): min=219, max=1089, avg=330.70, stdev=48.05 00:12:22.030 lat (usec): min=240, max=1103, avg=348.01, stdev=50.01 00:12:22.030 clat percentiles (usec): 00:12:22.030 | 1.00th=[ 260], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 297], 00:12:22.030 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 334], 00:12:22.030 | 70.00th=[ 343], 80.00th=[ 355], 90.00th=[ 379], 95.00th=[ 416], 00:12:22.030 | 99.00th=[ 461], 99.50th=[ 482], 99.90th=[ 938], 99.95th=[ 1090], 00:12:22.030 | 99.99th=[ 1090] 00:12:22.030 write: IOPS=1629, BW=6517KiB/s (6674kB/s)(6524KiB/1001msec); 0 zone resets 00:12:22.030 slat (nsec): min=11282, max=95727, avg=24949.96, stdev=10790.63 00:12:22.030 clat (usec): min=123, max=7639, avg=256.59, stdev=203.90 00:12:22.030 lat (usec): min=148, max=7664, avg=281.54, stdev=204.53 00:12:22.030 clat percentiles (usec): 00:12:22.030 | 1.00th=[ 141], 5.00th=[ 155], 10.00th=[ 176], 20.00th=[ 225], 00:12:22.030 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 265], 00:12:22.030 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 00:12:22.030 | 99.00th=[ 400], 99.50th=[ 553], 99.90th=[ 2343], 99.95th=[ 7635], 00:12:22.030 | 99.99th=[ 7635] 00:12:22.030 bw ( KiB/s): min= 8192, max= 8192, per=24.74%, avg=8192.00, stdev= 0.00, samples=1 00:12:22.030 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:22.030 lat (usec) : 250=21.88%, 500=77.71%, 750=0.19%, 1000=0.06% 00:12:22.030 lat (msec) : 2=0.06%, 4=0.06%, 10=0.03% 00:12:22.030 cpu : usr=1.50%, sys=5.60%, ctx=3168, majf=0, minf=10 00:12:22.030 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:22.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.030 issued rwts: total=1536,1631,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:12:22.030 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:22.030 job2: (groupid=0, jobs=1): err= 0: pid=68814: Mon Nov 18 23:56:28 2024 00:12:22.030 read: IOPS=1889, BW=7556KiB/s (7738kB/s)(7564KiB/1001msec) 00:12:22.030 slat (nsec): min=12106, max=75091, avg=18255.44, stdev=6186.18 00:12:22.030 clat (usec): min=181, max=623, avg=252.56, stdev=63.70 00:12:22.030 lat (usec): min=196, max=646, avg=270.82, stdev=67.59 00:12:22.030 clat percentiles (usec): 00:12:22.030 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:12:22.030 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 235], 00:12:22.030 | 70.00th=[ 251], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 363], 00:12:22.030 | 99.00th=[ 412], 99.50th=[ 474], 99.90th=[ 562], 99.95th=[ 627], 00:12:22.030 | 99.99th=[ 627] 00:12:22.030 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:22.030 slat (nsec): min=15272, max=94517, avg=28290.05, stdev=9512.79 00:12:22.030 clat (usec): min=126, max=1027, avg=205.76, stdev=73.72 00:12:22.030 lat (usec): min=144, max=1083, avg=234.05, stdev=80.20 00:12:22.030 clat percentiles (usec): 00:12:22.030 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 155], 00:12:22.030 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 176], 60.00th=[ 188], 00:12:22.030 | 70.00th=[ 247], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 334], 00:12:22.030 | 99.00th=[ 457], 99.50th=[ 490], 99.90th=[ 586], 99.95th=[ 611], 00:12:22.030 | 99.99th=[ 1029] 00:12:22.030 bw ( KiB/s): min= 8192, max= 8192, per=24.74%, avg=8192.00, stdev= 0.00, samples=1 00:12:22.030 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:22.030 lat (usec) : 250=70.65%, 500=28.97%, 750=0.36% 00:12:22.030 lat (msec) : 2=0.03% 00:12:22.030 cpu : usr=1.80%, sys=7.40%, ctx=3940, majf=0, minf=13 00:12:22.030 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:22.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.030 issued rwts: total=1891,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:22.030 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:22.030 job3: (groupid=0, jobs=1): err= 0: pid=68815: Mon Nov 18 23:56:28 2024 00:12:22.030 read: IOPS=1917, BW=7668KiB/s (7852kB/s)(7676KiB/1001msec) 00:12:22.030 slat (nsec): min=8432, max=49201, avg=14455.19, stdev=4201.22 00:12:22.030 clat (usec): min=179, max=1687, avg=262.86, stdev=70.41 00:12:22.030 lat (usec): min=192, max=1700, avg=277.31, stdev=70.17 00:12:22.030 clat percentiles (usec): 00:12:22.030 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:12:22.031 | 30.00th=[ 210], 40.00th=[ 221], 50.00th=[ 239], 60.00th=[ 285], 00:12:22.031 | 70.00th=[ 302], 80.00th=[ 318], 90.00th=[ 338], 95.00th=[ 375], 00:12:22.031 | 99.00th=[ 424], 99.50th=[ 437], 99.90th=[ 824], 99.95th=[ 1680], 00:12:22.031 | 99.99th=[ 1680] 00:12:22.031 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:22.031 slat (nsec): min=13070, max=95050, avg=22436.33, stdev=5460.79 00:12:22.031 clat (usec): min=126, max=1071, avg=202.75, stdev=53.78 00:12:22.031 lat (usec): min=146, max=1091, avg=225.19, stdev=53.15 00:12:22.031 clat percentiles (usec): 00:12:22.031 | 1.00th=[ 135], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 157], 00:12:22.031 | 30.00th=[ 163], 40.00th=[ 172], 50.00th=[ 182], 60.00th=[ 212], 00:12:22.031 | 70.00th=[ 235], 80.00th=[ 255], 
90.00th=[ 277], 95.00th=[ 293], 00:12:22.031 | 99.00th=[ 322], 99.50th=[ 326], 99.90th=[ 351], 99.95th=[ 392], 00:12:22.031 | 99.99th=[ 1074] 00:12:22.031 bw ( KiB/s): min=10312, max=10312, per=31.14%, avg=10312.00, stdev= 0.00, samples=1 00:12:22.031 iops : min= 2578, max= 2578, avg=2578.00, stdev= 0.00, samples=1 00:12:22.031 lat (usec) : 250=65.16%, 500=34.76%, 1000=0.03% 00:12:22.031 lat (msec) : 2=0.05% 00:12:22.031 cpu : usr=1.90%, sys=5.80%, ctx=3968, majf=0, minf=13 00:12:22.031 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:22.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.031 issued rwts: total=1919,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:22.031 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:22.031 00:12:22.031 Run status group 0 (all jobs): 00:12:22.031 READ: bw=30.7MiB/s (32.2MB/s), 6138KiB/s-9.83MiB/s (6285kB/s-10.3MB/s), io=30.7MiB (32.2MB), run=1001-1001msec 00:12:22.031 WRITE: bw=32.3MiB/s (33.9MB/s), 6517KiB/s-9.99MiB/s (6674kB/s-10.5MB/s), io=32.4MiB (33.9MB), run=1001-1001msec 00:12:22.031 00:12:22.031 Disk stats (read/write): 00:12:22.031 nvme0n1: ios=2098/2364, merge=0/0, ticks=424/379, in_queue=803, util=87.89% 00:12:22.031 nvme0n2: ios=1237/1536, merge=0/0, ticks=449/382, in_queue=831, util=89.17% 00:12:22.031 nvme0n3: ios=1536/1749, merge=0/0, ticks=401/391, in_queue=792, util=89.27% 00:12:22.031 nvme0n4: ios=1536/2019, merge=0/0, ticks=377/413, in_queue=790, util=89.82% 00:12:22.031 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:22.031 [global] 00:12:22.031 thread=1 00:12:22.031 invalidate=1 00:12:22.031 rw=randwrite 00:12:22.031 time_based=1 00:12:22.031 runtime=1 00:12:22.031 ioengine=libaio 00:12:22.031 direct=1 00:12:22.031 bs=4096 00:12:22.031 iodepth=1 00:12:22.031 norandommap=0 00:12:22.031 numjobs=1 00:12:22.031 00:12:22.031 verify_dump=1 00:12:22.031 verify_backlog=512 00:12:22.031 verify_state_save=0 00:12:22.031 do_verify=1 00:12:22.031 verify=crc32c-intel 00:12:22.031 [job0] 00:12:22.031 filename=/dev/nvme0n1 00:12:22.031 [job1] 00:12:22.031 filename=/dev/nvme0n2 00:12:22.031 [job2] 00:12:22.031 filename=/dev/nvme0n3 00:12:22.031 [job3] 00:12:22.031 filename=/dev/nvme0n4 00:12:22.031 Could not set queue depth (nvme0n1) 00:12:22.031 Could not set queue depth (nvme0n2) 00:12:22.031 Could not set queue depth (nvme0n3) 00:12:22.031 Could not set queue depth (nvme0n4) 00:12:22.289 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:22.289 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:22.289 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:22.289 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:22.289 fio-3.35 00:12:22.289 Starting 4 threads 00:12:23.665 00:12:23.665 job0: (groupid=0, jobs=1): err= 0: pid=68868: Mon Nov 18 23:56:29 2024 00:12:23.665 read: IOPS=1448, BW=5794KiB/s (5933kB/s)(5800KiB/1001msec) 00:12:23.665 slat (nsec): min=8778, max=89468, avg=15539.88, stdev=8056.74 00:12:23.665 clat (usec): min=194, max=3947, avg=346.58, stdev=110.97 00:12:23.665 lat (usec): min=213, max=3974, avg=362.12, 
stdev=112.27 00:12:23.665 clat percentiles (usec): 00:12:23.665 | 1.00th=[ 265], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 297], 00:12:23.665 | 30.00th=[ 306], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 343], 00:12:23.665 | 70.00th=[ 363], 80.00th=[ 392], 90.00th=[ 424], 95.00th=[ 453], 00:12:23.665 | 99.00th=[ 523], 99.50th=[ 553], 99.90th=[ 807], 99.95th=[ 3949], 00:12:23.665 | 99.99th=[ 3949] 00:12:23.665 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:23.665 slat (usec): min=10, max=115, avg=27.34, stdev= 9.05 00:12:23.665 clat (usec): min=137, max=7593, avg=277.76, stdev=331.24 00:12:23.665 lat (usec): min=160, max=7643, avg=305.10, stdev=332.09 00:12:23.665 clat percentiles (usec): 00:12:23.665 | 1.00th=[ 145], 5.00th=[ 165], 10.00th=[ 217], 20.00th=[ 235], 00:12:23.665 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 269], 00:12:23.665 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 338], 00:12:23.665 | 99.00th=[ 412], 99.50th=[ 478], 99.90th=[ 7308], 99.95th=[ 7570], 00:12:23.665 | 99.99th=[ 7570] 00:12:23.665 bw ( KiB/s): min= 7088, max= 7088, per=23.10%, avg=7088.00, stdev= 0.00, samples=1 00:12:23.665 iops : min= 1772, max= 1772, avg=1772.00, stdev= 0.00, samples=1 00:12:23.665 lat (usec) : 250=18.99%, 500=80.07%, 750=0.70%, 1000=0.07% 00:12:23.665 lat (msec) : 4=0.07%, 10=0.10% 00:12:23.665 cpu : usr=1.40%, sys=5.50%, ctx=3017, majf=0, minf=15 00:12:23.665 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.665 issued rwts: total=1450,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.665 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:23.665 job1: (groupid=0, jobs=1): err= 0: pid=68869: Mon Nov 18 23:56:29 2024 00:12:23.665 read: IOPS=2038, BW=8156KiB/s (8352kB/s)(8164KiB/1001msec) 00:12:23.665 slat (nsec): min=10278, max=55379, avg=16456.41, stdev=4324.15 00:12:23.665 clat (usec): min=164, max=566, avg=244.07, stdev=84.37 00:12:23.665 lat (usec): min=177, max=582, avg=260.52, stdev=84.68 00:12:23.665 clat percentiles (usec): 00:12:23.665 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:12:23.665 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 215], 00:12:23.665 | 70.00th=[ 231], 80.00th=[ 334], 90.00th=[ 396], 95.00th=[ 420], 00:12:23.665 | 99.00th=[ 469], 99.50th=[ 498], 99.90th=[ 529], 99.95th=[ 553], 00:12:23.665 | 99.99th=[ 570] 00:12:23.665 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:23.665 slat (usec): min=10, max=190, avg=25.20, stdev=12.15 00:12:23.665 clat (usec): min=117, max=583, avg=199.21, stdev=73.04 00:12:23.665 lat (usec): min=143, max=620, avg=224.41, stdev=76.27 00:12:23.665 clat percentiles (usec): 00:12:23.665 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:12:23.665 | 30.00th=[ 147], 40.00th=[ 153], 50.00th=[ 161], 60.00th=[ 180], 00:12:23.665 | 70.00th=[ 243], 80.00th=[ 273], 90.00th=[ 302], 95.00th=[ 330], 00:12:23.665 | 99.00th=[ 429], 99.50th=[ 486], 99.90th=[ 537], 99.95th=[ 553], 00:12:23.665 | 99.99th=[ 586] 00:12:23.665 bw ( KiB/s): min= 8192, max= 8192, per=26.69%, avg=8192.00, stdev= 0.00, samples=1 00:12:23.665 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:23.665 lat (usec) : 250=73.29%, 500=26.34%, 750=0.37% 00:12:23.665 cpu : usr=1.70%, sys=6.90%, ctx=4167, majf=0, 
minf=11 00:12:23.665 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.665 issued rwts: total=2041,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.665 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:23.665 job2: (groupid=0, jobs=1): err= 0: pid=68870: Mon Nov 18 23:56:29 2024 00:12:23.665 read: IOPS=2069, BW=8280KiB/s (8478kB/s)(8288KiB/1001msec) 00:12:23.665 slat (nsec): min=11401, max=50638, avg=14173.41, stdev=4814.35 00:12:23.665 clat (usec): min=173, max=771, avg=217.21, stdev=48.16 00:12:23.665 lat (usec): min=185, max=793, avg=231.38, stdev=50.06 00:12:23.665 clat percentiles (usec): 00:12:23.665 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 00:12:23.665 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 212], 00:12:23.665 | 70.00th=[ 221], 80.00th=[ 229], 90.00th=[ 243], 95.00th=[ 293], 00:12:23.665 | 99.00th=[ 457], 99.50th=[ 506], 99.90th=[ 635], 99.95th=[ 668], 00:12:23.665 | 99.99th=[ 775] 00:12:23.665 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:23.665 slat (usec): min=14, max=112, avg=23.56, stdev= 9.50 00:12:23.665 clat (usec): min=120, max=2142, avg=176.38, stdev=77.47 00:12:23.665 lat (usec): min=138, max=2164, avg=199.93, stdev=83.68 00:12:23.665 clat percentiles (usec): 00:12:23.665 | 1.00th=[ 125], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:12:23.665 | 30.00th=[ 141], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 161], 00:12:23.665 | 70.00th=[ 169], 80.00th=[ 186], 90.00th=[ 262], 95.00th=[ 297], 00:12:23.665 | 99.00th=[ 453], 99.50th=[ 461], 99.90th=[ 594], 99.95th=[ 603], 00:12:23.665 | 99.99th=[ 2147] 00:12:23.665 bw ( KiB/s): min= 8368, max= 8368, per=27.27%, avg=8368.00, stdev= 0.00, samples=1 00:12:23.665 iops : min= 2092, max= 2092, avg=2092.00, stdev= 0.00, samples=1 00:12:23.665 lat (usec) : 250=89.94%, 500=9.72%, 750=0.30%, 1000=0.02% 00:12:23.665 lat (msec) : 4=0.02% 00:12:23.666 cpu : usr=2.00%, sys=7.10%, ctx=4633, majf=0, minf=15 00:12:23.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.666 issued rwts: total=2072,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:23.666 job3: (groupid=0, jobs=1): err= 0: pid=68871: Mon Nov 18 23:56:29 2024 00:12:23.666 read: IOPS=1520, BW=6082KiB/s (6228kB/s)(6088KiB/1001msec) 00:12:23.666 slat (nsec): min=8578, max=63503, avg=15686.98, stdev=6105.00 00:12:23.666 clat (usec): min=253, max=577, avg=335.76, stdev=46.11 00:12:23.666 lat (usec): min=270, max=590, avg=351.45, stdev=48.22 00:12:23.666 clat percentiles (usec): 00:12:23.666 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 297], 00:12:23.666 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 334], 00:12:23.666 | 70.00th=[ 351], 80.00th=[ 379], 90.00th=[ 404], 95.00th=[ 424], 00:12:23.666 | 99.00th=[ 461], 99.50th=[ 478], 99.90th=[ 529], 99.95th=[ 578], 00:12:23.666 | 99.99th=[ 578] 00:12:23.666 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:23.666 slat (usec): min=6, max=194, avg=24.48, stdev=14.79 00:12:23.666 clat (usec): min=134, max=550, avg=274.49, 
stdev=46.26 00:12:23.666 lat (usec): min=174, max=661, avg=298.98, stdev=51.02 00:12:23.666 clat percentiles (usec): 00:12:23.666 | 1.00th=[ 206], 5.00th=[ 227], 10.00th=[ 235], 20.00th=[ 243], 00:12:23.666 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:12:23.666 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 326], 95.00th=[ 363], 00:12:23.666 | 99.00th=[ 469], 99.50th=[ 502], 99.90th=[ 523], 99.95th=[ 553], 00:12:23.666 | 99.99th=[ 553] 00:12:23.666 bw ( KiB/s): min= 7720, max= 7720, per=25.16%, avg=7720.00, stdev= 0.00, samples=1 00:12:23.666 iops : min= 1930, max= 1930, avg=1930.00, stdev= 0.00, samples=1 00:12:23.666 lat (usec) : 250=15.34%, 500=84.30%, 750=0.36% 00:12:23.666 cpu : usr=1.80%, sys=4.70%, ctx=3140, majf=0, minf=5 00:12:23.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:23.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.666 issued rwts: total=1522,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:23.666 00:12:23.666 Run status group 0 (all jobs): 00:12:23.666 READ: bw=27.6MiB/s (29.0MB/s), 5794KiB/s-8280KiB/s (5933kB/s-8478kB/s), io=27.7MiB (29.0MB), run=1001-1001msec 00:12:23.666 WRITE: bw=30.0MiB/s (31.4MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:12:23.666 00:12:23.666 Disk stats (read/write): 00:12:23.666 nvme0n1: ios=1118/1536, merge=0/0, ticks=399/403, in_queue=802, util=87.88% 00:12:23.666 nvme0n2: ios=1584/1931, merge=0/0, ticks=453/403, in_queue=856, util=90.41% 00:12:23.666 nvme0n3: ios=1908/2048, merge=0/0, ticks=446/410, in_queue=856, util=89.86% 00:12:23.666 nvme0n4: ios=1146/1536, merge=0/0, ticks=386/393, in_queue=779, util=90.02% 00:12:23.666 23:56:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:23.666 [global] 00:12:23.666 thread=1 00:12:23.666 invalidate=1 00:12:23.666 rw=write 00:12:23.666 time_based=1 00:12:23.666 runtime=1 00:12:23.666 ioengine=libaio 00:12:23.666 direct=1 00:12:23.666 bs=4096 00:12:23.666 iodepth=128 00:12:23.666 norandommap=0 00:12:23.666 numjobs=1 00:12:23.666 00:12:23.666 verify_dump=1 00:12:23.666 verify_backlog=512 00:12:23.666 verify_state_save=0 00:12:23.666 do_verify=1 00:12:23.666 verify=crc32c-intel 00:12:23.666 [job0] 00:12:23.666 filename=/dev/nvme0n1 00:12:23.666 [job1] 00:12:23.666 filename=/dev/nvme0n2 00:12:23.666 [job2] 00:12:23.666 filename=/dev/nvme0n3 00:12:23.666 [job3] 00:12:23.666 filename=/dev/nvme0n4 00:12:23.666 Could not set queue depth (nvme0n1) 00:12:23.666 Could not set queue depth (nvme0n2) 00:12:23.666 Could not set queue depth (nvme0n3) 00:12:23.666 Could not set queue depth (nvme0n4) 00:12:23.666 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:23.666 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:23.666 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:23.666 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:23.666 fio-3.35 00:12:23.666 Starting 4 threads 00:12:25.045 00:12:25.045 job0: (groupid=0, jobs=1): err= 0: pid=68927: Mon Nov 18 23:56:31 2024 
00:12:25.045 read: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec) 00:12:25.045 slat (usec): min=8, max=6585, avg=307.34, stdev=977.93 00:12:25.045 clat (usec): min=26656, max=54710, avg=39014.32, stdev=5574.06 00:12:25.045 lat (usec): min=26677, max=54743, avg=39321.66, stdev=5543.55 00:12:25.045 clat percentiles (usec): 00:12:25.045 | 1.00th=[29492], 5.00th=[31065], 10.00th=[32637], 20.00th=[33424], 00:12:25.045 | 30.00th=[34866], 40.00th=[36439], 50.00th=[38536], 60.00th=[40633], 00:12:25.045 | 70.00th=[43254], 80.00th=[45351], 90.00th=[46400], 95.00th=[46924], 00:12:25.045 | 99.00th=[50594], 99.50th=[53740], 99.90th=[54789], 99.95th=[54789], 00:12:25.045 | 99.99th=[54789] 00:12:25.045 write: IOPS=1683, BW=6735KiB/s (6897kB/s)(6796KiB/1009msec); 0 zone resets 00:12:25.045 slat (usec): min=15, max=8755, avg=301.12, stdev=1073.78 00:12:25.045 clat (usec): min=8194, max=77252, avg=38963.27, stdev=13165.18 00:12:25.045 lat (usec): min=8223, max=77325, avg=39264.39, stdev=13205.34 00:12:25.045 clat percentiles (usec): 00:12:25.045 | 1.00th=[17171], 5.00th=[24511], 10.00th=[25822], 20.00th=[26870], 00:12:25.045 | 30.00th=[31589], 40.00th=[34866], 50.00th=[36439], 60.00th=[37487], 00:12:25.045 | 70.00th=[43779], 80.00th=[48497], 90.00th=[56886], 95.00th=[68682], 00:12:25.045 | 99.00th=[77071], 99.50th=[77071], 99.90th=[77071], 99.95th=[77071], 00:12:25.045 | 99.99th=[77071] 00:12:25.045 bw ( KiB/s): min= 4357, max= 8192, per=14.45%, avg=6274.50, stdev=2711.75, samples=2 00:12:25.045 iops : min= 1089, max= 2048, avg=1568.50, stdev=678.12, samples=2 00:12:25.045 lat (msec) : 10=0.09%, 20=0.96%, 50=90.36%, 100=8.59% 00:12:25.045 cpu : usr=2.18%, sys=5.56%, ctx=431, majf=0, minf=13 00:12:25.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:12:25.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:25.045 issued rwts: total=1536,1699,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:25.045 job1: (groupid=0, jobs=1): err= 0: pid=68929: Mon Nov 18 23:56:31 2024 00:12:25.045 read: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec) 00:12:25.045 slat (usec): min=6, max=8744, avg=182.65, stdev=798.01 00:12:25.045 clat (usec): min=13093, max=37944, avg=22333.78, stdev=4359.36 00:12:25.045 lat (usec): min=13132, max=37963, avg=22516.43, stdev=4433.04 00:12:25.045 clat percentiles (usec): 00:12:25.045 | 1.00th=[14222], 5.00th=[15664], 10.00th=[15926], 20.00th=[16712], 00:12:25.045 | 30.00th=[21890], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:12:25.045 | 70.00th=[23462], 80.00th=[24249], 90.00th=[27395], 95.00th=[30540], 00:12:25.045 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:12:25.045 | 99.99th=[38011] 00:12:25.045 write: IOPS=2367, BW=9468KiB/s (9695kB/s)(9544KiB/1008msec); 0 zone resets 00:12:25.045 slat (usec): min=13, max=8791, avg=253.55, stdev=925.49 00:12:25.045 clat (usec): min=6738, max=99800, avg=33945.09, stdev=16227.07 00:12:25.045 lat (usec): min=8129, max=99852, avg=34198.64, stdev=16332.80 00:12:25.045 clat percentiles (msec): 00:12:25.045 | 1.00th=[ 12], 5.00th=[ 22], 10.00th=[ 24], 20.00th=[ 24], 00:12:25.045 | 30.00th=[ 25], 40.00th=[ 27], 50.00th=[ 28], 60.00th=[ 29], 00:12:25.045 | 70.00th=[ 36], 80.00th=[ 43], 90.00th=[ 48], 95.00th=[ 73], 00:12:25.045 | 99.00th=[ 99], 99.50th=[ 100], 99.90th=[ 101], 99.95th=[ 101], 00:12:25.045 | 
99.99th=[ 101] 00:12:25.045 bw ( KiB/s): min= 8175, max= 9860, per=20.77%, avg=9017.50, stdev=1191.47, samples=2 00:12:25.045 iops : min= 2043, max= 2465, avg=2254.00, stdev=298.40, samples=2 00:12:25.045 lat (msec) : 10=0.38%, 20=12.74%, 50=81.89%, 100=4.98% 00:12:25.045 cpu : usr=1.69%, sys=8.64%, ctx=338, majf=0, minf=19 00:12:25.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:12:25.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:25.045 issued rwts: total=2048,2386,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:25.045 job2: (groupid=0, jobs=1): err= 0: pid=68933: Mon Nov 18 23:56:31 2024 00:12:25.045 read: IOPS=4904, BW=19.2MiB/s (20.1MB/s)(19.2MiB/1004msec) 00:12:25.045 slat (usec): min=8, max=5484, avg=99.90, stdev=435.23 00:12:25.045 clat (usec): min=1039, max=18636, avg=12995.10, stdev=1426.28 00:12:25.045 lat (usec): min=3772, max=18759, avg=13095.00, stdev=1425.57 00:12:25.045 clat percentiles (usec): 00:12:25.045 | 1.00th=[ 6259], 5.00th=[11076], 10.00th=[11600], 20.00th=[12649], 00:12:25.045 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13173], 60.00th=[13173], 00:12:25.045 | 70.00th=[13435], 80.00th=[13566], 90.00th=[14091], 95.00th=[14746], 00:12:25.045 | 99.00th=[16712], 99.50th=[17433], 99.90th=[17957], 99.95th=[17957], 00:12:25.045 | 99.99th=[18744] 00:12:25.045 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:12:25.045 slat (usec): min=11, max=5211, avg=91.42, stdev=505.74 00:12:25.045 clat (usec): min=5439, max=18983, avg=12283.78, stdev=1221.84 00:12:25.045 lat (usec): min=5474, max=19002, avg=12375.20, stdev=1306.96 00:12:25.045 clat percentiles (usec): 00:12:25.045 | 1.00th=[ 8979], 5.00th=[10552], 10.00th=[11338], 20.00th=[11731], 00:12:25.045 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12125], 60.00th=[12256], 00:12:25.045 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13566], 95.00th=[13960], 00:12:25.045 | 99.00th=[16712], 99.50th=[17695], 99.90th=[18482], 99.95th=[19006], 00:12:25.045 | 99.99th=[19006] 00:12:25.045 bw ( KiB/s): min=20439, max=20480, per=47.13%, avg=20459.50, stdev=28.99, samples=2 00:12:25.046 iops : min= 5109, max= 5120, avg=5114.50, stdev= 7.78, samples=2 00:12:25.046 lat (msec) : 2=0.01%, 4=0.10%, 10=3.00%, 20=96.89% 00:12:25.046 cpu : usr=4.79%, sys=14.36%, ctx=372, majf=0, minf=18 00:12:25.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:25.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:25.046 issued rwts: total=4924,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:25.046 job3: (groupid=0, jobs=1): err= 0: pid=68934: Mon Nov 18 23:56:31 2024 00:12:25.046 read: IOPS=1520, BW=6083KiB/s (6229kB/s)(6144KiB/1010msec) 00:12:25.046 slat (usec): min=8, max=12142, avg=308.85, stdev=1139.88 00:12:25.046 clat (usec): min=23395, max=52831, avg=38840.95, stdev=6183.45 00:12:25.046 lat (usec): min=23639, max=52848, avg=39149.80, stdev=6136.53 00:12:25.046 clat percentiles (usec): 00:12:25.046 | 1.00th=[23725], 5.00th=[28705], 10.00th=[30802], 20.00th=[33424], 00:12:25.046 | 30.00th=[34341], 40.00th=[36439], 50.00th=[38536], 60.00th=[41157], 00:12:25.046 | 70.00th=[43779], 80.00th=[45351], 
90.00th=[46400], 95.00th=[46924], 00:12:25.046 | 99.00th=[50594], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:12:25.046 | 99.99th=[52691] 00:12:25.046 write: IOPS=1737, BW=6950KiB/s (7117kB/s)(7020KiB/1010msec); 0 zone resets 00:12:25.046 slat (usec): min=12, max=8566, avg=290.39, stdev=1039.12 00:12:25.046 clat (usec): min=9762, max=77365, avg=38246.59, stdev=13288.46 00:12:25.046 lat (usec): min=12547, max=77545, avg=38536.98, stdev=13332.01 00:12:25.046 clat percentiles (usec): 00:12:25.046 | 1.00th=[15533], 5.00th=[22938], 10.00th=[25822], 20.00th=[26608], 00:12:25.046 | 30.00th=[27657], 40.00th=[34341], 50.00th=[35914], 60.00th=[36963], 00:12:25.046 | 70.00th=[42730], 80.00th=[47973], 90.00th=[56361], 95.00th=[68682], 00:12:25.046 | 99.00th=[77071], 99.50th=[77071], 99.90th=[77071], 99.95th=[77071], 00:12:25.046 | 99.99th=[77071] 00:12:25.046 bw ( KiB/s): min= 4822, max= 8192, per=14.99%, avg=6507.00, stdev=2382.95, samples=2 00:12:25.046 iops : min= 1205, max= 2048, avg=1626.50, stdev=596.09, samples=2 00:12:25.046 lat (msec) : 10=0.03%, 20=0.85%, 50=90.67%, 100=8.45% 00:12:25.046 cpu : usr=2.08%, sys=5.85%, ctx=446, majf=0, minf=3 00:12:25.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:12:25.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:25.046 issued rwts: total=1536,1755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:25.046 00:12:25.046 Run status group 0 (all jobs): 00:12:25.046 READ: bw=38.8MiB/s (40.7MB/s), 6083KiB/s-19.2MiB/s (6229kB/s-20.1MB/s), io=39.2MiB (41.1MB), run=1004-1010msec 00:12:25.046 WRITE: bw=42.4MiB/s (44.4MB/s), 6735KiB/s-19.9MiB/s (6897kB/s-20.9MB/s), io=42.8MiB (44.9MB), run=1004-1010msec 00:12:25.046 00:12:25.046 Disk stats (read/write): 00:12:25.046 nvme0n1: ios=1331/1536, merge=0/0, ticks=12100/13581, in_queue=25681, util=88.97% 00:12:25.046 nvme0n2: ios=1710/2048, merge=0/0, ticks=12287/23026, in_queue=35313, util=90.30% 00:12:25.046 nvme0n3: ios=4117/4608, merge=0/0, ticks=25573/23801, in_queue=49374, util=89.86% 00:12:25.046 nvme0n4: ios=1373/1536, merge=0/0, ticks=12785/13318, in_queue=26103, util=90.23% 00:12:25.046 23:56:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:25.046 [global] 00:12:25.046 thread=1 00:12:25.046 invalidate=1 00:12:25.046 rw=randwrite 00:12:25.046 time_based=1 00:12:25.046 runtime=1 00:12:25.046 ioengine=libaio 00:12:25.046 direct=1 00:12:25.046 bs=4096 00:12:25.046 iodepth=128 00:12:25.046 norandommap=0 00:12:25.046 numjobs=1 00:12:25.046 00:12:25.046 verify_dump=1 00:12:25.046 verify_backlog=512 00:12:25.046 verify_state_save=0 00:12:25.046 do_verify=1 00:12:25.046 verify=crc32c-intel 00:12:25.046 [job0] 00:12:25.046 filename=/dev/nvme0n1 00:12:25.046 [job1] 00:12:25.046 filename=/dev/nvme0n2 00:12:25.046 [job2] 00:12:25.046 filename=/dev/nvme0n3 00:12:25.046 [job3] 00:12:25.046 filename=/dev/nvme0n4 00:12:25.046 Could not set queue depth (nvme0n1) 00:12:25.046 Could not set queue depth (nvme0n2) 00:12:25.046 Could not set queue depth (nvme0n3) 00:12:25.046 Could not set queue depth (nvme0n4) 00:12:25.046 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:25.046 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:25.046 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:25.046 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:25.046 fio-3.35 00:12:25.046 Starting 4 threads 00:12:26.424 00:12:26.424 job0: (groupid=0, jobs=1): err= 0: pid=68994: Mon Nov 18 23:56:32 2024 00:12:26.424 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:12:26.424 slat (usec): min=7, max=6144, avg=83.68, stdev=511.13 00:12:26.424 clat (usec): min=7069, max=21268, avg=11857.11, stdev=1436.81 00:12:26.424 lat (usec): min=7083, max=25232, avg=11940.80, stdev=1458.48 00:12:26.424 clat percentiles (usec): 00:12:26.424 | 1.00th=[ 7635], 5.00th=[10290], 10.00th=[10814], 20.00th=[11076], 00:12:26.424 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:12:26.424 | 70.00th=[12125], 80.00th=[12518], 90.00th=[13435], 95.00th=[13960], 00:12:26.424 | 99.00th=[17957], 99.50th=[18220], 99.90th=[21103], 99.95th=[21103], 00:12:26.424 | 99.99th=[21365] 00:12:26.424 write: IOPS=5693, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1003msec); 0 zone resets 00:12:26.424 slat (usec): min=8, max=7088, avg=84.51, stdev=475.18 00:12:26.424 clat (usec): min=1163, max=16314, avg=10556.64, stdev=1127.63 00:12:26.424 lat (usec): min=5200, max=16605, avg=10641.15, stdev=1045.26 00:12:26.424 clat percentiles (usec): 00:12:26.424 | 1.00th=[ 6456], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10028], 00:12:26.424 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:12:26.424 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11600], 95.00th=[12649], 00:12:26.424 | 99.00th=[14222], 99.50th=[14484], 99.90th=[16319], 99.95th=[16319], 00:12:26.424 | 99.99th=[16319] 00:12:26.424 bw ( KiB/s): min=20480, max=24576, per=48.77%, avg=22528.00, stdev=2896.31, samples=2 00:12:26.424 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:12:26.424 lat (msec) : 2=0.01%, 10=12.99%, 20=86.93%, 50=0.07% 00:12:26.424 cpu : usr=4.89%, sys=15.77%, ctx=244, majf=0, minf=1 00:12:26.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:26.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:26.424 issued rwts: total=5632,5711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:26.424 job1: (groupid=0, jobs=1): err= 0: pid=68995: Mon Nov 18 23:56:32 2024 00:12:26.424 read: IOPS=1517, BW=6071KiB/s (6217kB/s)(6144KiB/1012msec) 00:12:26.424 slat (usec): min=7, max=20052, avg=336.72, stdev=1514.46 00:12:26.424 clat (msec): min=17, max=102, avg=43.39, stdev=18.56 00:12:26.424 lat (msec): min=17, max=102, avg=43.73, stdev=18.68 00:12:26.424 clat percentiles (msec): 00:12:26.424 | 1.00th=[ 18], 5.00th=[ 26], 10.00th=[ 28], 20.00th=[ 29], 00:12:26.424 | 30.00th=[ 32], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 40], 00:12:26.425 | 70.00th=[ 51], 80.00th=[ 57], 90.00th=[ 73], 95.00th=[ 81], 00:12:26.425 | 99.00th=[ 99], 99.50th=[ 101], 99.90th=[ 101], 99.95th=[ 103], 00:12:26.425 | 99.99th=[ 103] 00:12:26.425 write: IOPS=1637, BW=6549KiB/s (6707kB/s)(6628KiB/1012msec); 0 zone resets 00:12:26.425 slat (usec): min=7, max=20490, avg=281.83, stdev=1467.54 00:12:26.425 clat (usec): min=11284, max=89710, avg=34708.97, stdev=11908.51 00:12:26.425 lat (usec): 
min=12047, max=89747, avg=34990.80, stdev=12000.47 00:12:26.425 clat percentiles (usec): 00:12:26.425 | 1.00th=[15401], 5.00th=[22414], 10.00th=[25297], 20.00th=[27657], 00:12:26.425 | 30.00th=[30540], 40.00th=[31065], 50.00th=[31851], 60.00th=[32375], 00:12:26.425 | 70.00th=[35914], 80.00th=[39584], 90.00th=[48497], 95.00th=[52167], 00:12:26.425 | 99.00th=[83362], 99.50th=[86508], 99.90th=[88605], 99.95th=[89654], 00:12:26.425 | 99.99th=[89654] 00:12:26.425 bw ( KiB/s): min= 4311, max= 7968, per=13.29%, avg=6139.50, stdev=2585.89, samples=2 00:12:26.425 iops : min= 1077, max= 1992, avg=1534.50, stdev=647.00, samples=2 00:12:26.425 lat (msec) : 20=3.16%, 50=79.08%, 100=17.73%, 250=0.03% 00:12:26.425 cpu : usr=2.18%, sys=5.04%, ctx=274, majf=0, minf=2 00:12:26.425 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:12:26.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:26.425 issued rwts: total=1536,1657,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.425 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:26.425 job2: (groupid=0, jobs=1): err= 0: pid=68996: Mon Nov 18 23:56:32 2024 00:12:26.425 read: IOPS=2479, BW=9916KiB/s (10.2MB/s)(9956KiB/1004msec) 00:12:26.425 slat (usec): min=7, max=17416, avg=210.51, stdev=1117.02 00:12:26.425 clat (usec): min=1453, max=90349, avg=25348.94, stdev=12546.16 00:12:26.425 lat (usec): min=4510, max=90369, avg=25559.45, stdev=12643.10 00:12:26.425 clat percentiles (usec): 00:12:26.425 | 1.00th=[ 5080], 5.00th=[14353], 10.00th=[14877], 20.00th=[15270], 00:12:26.425 | 30.00th=[16319], 40.00th=[20055], 50.00th=[23200], 60.00th=[25035], 00:12:26.425 | 70.00th=[25822], 80.00th=[33162], 90.00th=[46400], 95.00th=[47973], 00:12:26.425 | 99.00th=[70779], 99.50th=[82314], 99.90th=[90702], 99.95th=[90702], 00:12:26.425 | 99.99th=[90702] 00:12:26.425 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:12:26.425 slat (usec): min=12, max=16368, avg=177.05, stdev=970.50 00:12:26.425 clat (usec): min=7398, max=98449, avg=24934.85, stdev=16555.21 00:12:26.425 lat (msec): min=9, max=101, avg=25.11, stdev=16.64 00:12:26.425 clat percentiles (usec): 00:12:26.425 | 1.00th=[11207], 5.00th=[12256], 10.00th=[12911], 20.00th=[13960], 00:12:26.425 | 30.00th=[14353], 40.00th=[15008], 50.00th=[17957], 60.00th=[22938], 00:12:26.425 | 70.00th=[24773], 80.00th=[36963], 90.00th=[47449], 95.00th=[51119], 00:12:26.425 | 99.00th=[92799], 99.50th=[94897], 99.90th=[98042], 99.95th=[98042], 00:12:26.425 | 99.99th=[98042] 00:12:26.425 bw ( KiB/s): min= 8192, max=12312, per=22.19%, avg=10252.00, stdev=2913.28, samples=2 00:12:26.425 iops : min= 2048, max= 3078, avg=2563.00, stdev=728.32, samples=2 00:12:26.425 lat (msec) : 2=0.02%, 10=1.53%, 20=46.94%, 50=47.34%, 100=4.18% 00:12:26.425 cpu : usr=2.79%, sys=8.08%, ctx=198, majf=0, minf=1 00:12:26.425 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:26.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:26.425 issued rwts: total=2489,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.425 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:26.425 job3: (groupid=0, jobs=1): err= 0: pid=68997: Mon Nov 18 23:56:32 2024 00:12:26.425 read: IOPS=1513, BW=6053KiB/s (6198kB/s)(6144KiB/1015msec) 00:12:26.425 slat (usec): 
min=8, max=36482, avg=332.82, stdev=1756.16 00:12:26.425 clat (msec): min=19, max=103, avg=44.16, stdev=19.13 00:12:26.425 lat (msec): min=19, max=103, avg=44.49, stdev=19.25 00:12:26.425 clat percentiles (msec): 00:12:26.425 | 1.00th=[ 23], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 29], 00:12:26.425 | 30.00th=[ 31], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 41], 00:12:26.425 | 70.00th=[ 50], 80.00th=[ 66], 90.00th=[ 75], 95.00th=[ 82], 00:12:26.425 | 99.00th=[ 97], 99.50th=[ 97], 99.90th=[ 104], 99.95th=[ 105], 00:12:26.425 | 99.99th=[ 105] 00:12:26.425 write: IOPS=1767, BW=7070KiB/s (7240kB/s)(7176KiB/1015msec); 0 zone resets 00:12:26.425 slat (usec): min=6, max=21312, avg=263.25, stdev=1375.71 00:12:26.425 clat (usec): min=14670, max=60267, avg=33900.81, stdev=9805.79 00:12:26.425 lat (usec): min=15276, max=61320, avg=34164.06, stdev=9784.11 00:12:26.425 clat percentiles (usec): 00:12:26.425 | 1.00th=[15664], 5.00th=[17171], 10.00th=[20317], 20.00th=[27919], 00:12:26.425 | 30.00th=[29754], 40.00th=[31327], 50.00th=[32375], 60.00th=[34866], 00:12:26.425 | 70.00th=[36439], 80.00th=[42730], 90.00th=[47449], 95.00th=[50594], 00:12:26.425 | 99.00th=[60031], 99.50th=[60031], 99.90th=[60031], 99.95th=[60031], 00:12:26.425 | 99.99th=[60031] 00:12:26.425 bw ( KiB/s): min= 5026, max= 8320, per=14.45%, avg=6673.00, stdev=2329.21, samples=2 00:12:26.425 iops : min= 1256, max= 2080, avg=1668.00, stdev=582.66, samples=2 00:12:26.425 lat (msec) : 20=5.29%, 50=78.11%, 100=16.52%, 250=0.09% 00:12:26.425 cpu : usr=1.97%, sys=5.52%, ctx=301, majf=0, minf=7 00:12:26.425 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:12:26.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:26.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:26.425 issued rwts: total=1536,1794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:26.425 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:26.425 00:12:26.425 Run status group 0 (all jobs): 00:12:26.425 READ: bw=43.1MiB/s (45.2MB/s), 6053KiB/s-21.9MiB/s (6198kB/s-23.0MB/s), io=43.7MiB (45.8MB), run=1003-1015msec 00:12:26.425 WRITE: bw=45.1MiB/s (47.3MB/s), 6549KiB/s-22.2MiB/s (6707kB/s-23.3MB/s), io=45.8MiB (48.0MB), run=1003-1015msec 00:12:26.425 00:12:26.425 Disk stats (read/write): 00:12:26.425 nvme0n1: ios=4858/5120, merge=0/0, ticks=52237/48905, in_queue=101142, util=88.87% 00:12:26.425 nvme0n2: ios=1073/1521, merge=0/0, ticks=26282/24301, in_queue=50583, util=85.93% 00:12:26.425 nvme0n3: ios=1665/2048, merge=0/0, ticks=27067/24332, in_queue=51399, util=89.29% 00:12:26.425 nvme0n4: ios=1094/1536, merge=0/0, ticks=26985/26198, in_queue=53183, util=89.64% 00:12:26.425 23:56:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:26.425 23:56:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=69010 00:12:26.425 23:56:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:26.425 23:56:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:26.425 [global] 00:12:26.425 thread=1 00:12:26.425 invalidate=1 00:12:26.425 rw=read 00:12:26.425 time_based=1 00:12:26.425 runtime=10 00:12:26.425 ioengine=libaio 00:12:26.425 direct=1 00:12:26.425 bs=4096 00:12:26.425 iodepth=1 00:12:26.425 norandommap=1 00:12:26.425 numjobs=1 00:12:26.425 00:12:26.425 [job0] 00:12:26.425 filename=/dev/nvme0n1 00:12:26.425 [job1] 
00:12:26.425 filename=/dev/nvme0n2 00:12:26.425 [job2] 00:12:26.425 filename=/dev/nvme0n3 00:12:26.425 [job3] 00:12:26.425 filename=/dev/nvme0n4 00:12:26.425 Could not set queue depth (nvme0n1) 00:12:26.425 Could not set queue depth (nvme0n2) 00:12:26.425 Could not set queue depth (nvme0n3) 00:12:26.425 Could not set queue depth (nvme0n4) 00:12:26.425 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:26.425 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:26.425 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:26.425 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:26.425 fio-3.35 00:12:26.425 Starting 4 threads 00:12:29.712 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:29.712 fio: pid=69053, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:29.712 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=32649216, buflen=4096 00:12:29.712 23:56:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:29.712 fio: pid=69052, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:29.712 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=57360384, buflen=4096 00:12:29.712 23:56:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:29.712 23:56:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:29.971 fio: pid=69050, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:29.971 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=40677376, buflen=4096 00:12:30.229 23:56:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:30.229 23:56:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:30.489 fio: pid=69051, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:30.489 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=8216576, buflen=4096 00:12:30.489 00:12:30.489 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69050: Mon Nov 18 23:56:37 2024 00:12:30.489 read: IOPS=2779, BW=10.9MiB/s (11.4MB/s)(38.8MiB/3573msec) 00:12:30.489 slat (usec): min=8, max=12085, avg=26.41, stdev=198.42 00:12:30.489 clat (usec): min=156, max=2922, avg=331.40, stdev=66.73 00:12:30.489 lat (usec): min=175, max=12344, avg=357.81, stdev=208.63 00:12:30.489 clat percentiles (usec): 00:12:30.489 | 1.00th=[ 184], 5.00th=[ 245], 10.00th=[ 269], 20.00th=[ 314], 00:12:30.489 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 343], 00:12:30.489 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 367], 95.00th=[ 379], 00:12:30.489 | 99.00th=[ 416], 99.50th=[ 537], 99.90th=[ 1090], 99.95th=[ 1450], 00:12:30.489 | 99.99th=[ 2933] 00:12:30.489 bw ( KiB/s): min=10815, max=10984, per=21.53%, 
avg=10870.17, stdev=70.09, samples=6 00:12:30.489 iops : min= 2703, max= 2746, avg=2717.33, stdev=17.57, samples=6 00:12:30.489 lat (usec) : 250=5.86%, 500=93.55%, 750=0.38%, 1000=0.09% 00:12:30.489 lat (msec) : 2=0.08%, 4=0.03% 00:12:30.489 cpu : usr=1.18%, sys=5.43%, ctx=9940, majf=0, minf=1 00:12:30.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:30.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.489 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.489 issued rwts: total=9932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:30.489 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69051: Mon Nov 18 23:56:37 2024 00:12:30.489 read: IOPS=4616, BW=18.0MiB/s (18.9MB/s)(71.8MiB/3984msec) 00:12:30.489 slat (usec): min=8, max=17820, avg=17.50, stdev=201.69 00:12:30.489 clat (usec): min=141, max=3190, avg=197.69, stdev=59.92 00:12:30.489 lat (usec): min=154, max=18037, avg=215.19, stdev=211.35 00:12:30.489 clat percentiles (usec): 00:12:30.489 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:12:30.489 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 192], 00:12:30.489 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 221], 95.00th=[ 273], 00:12:30.489 | 99.00th=[ 367], 99.50th=[ 383], 99.90th=[ 510], 99.95th=[ 1004], 00:12:30.489 | 99.99th=[ 3130] 00:12:30.489 bw ( KiB/s): min=12745, max=19776, per=36.55%, avg=18457.86, stdev=2555.50, samples=7 00:12:30.489 iops : min= 3186, max= 4944, avg=4614.43, stdev=638.97, samples=7 00:12:30.489 lat (usec) : 250=92.47%, 500=7.42%, 750=0.04%, 1000=0.01% 00:12:30.489 lat (msec) : 2=0.03%, 4=0.03% 00:12:30.489 cpu : usr=1.68%, sys=5.55%, ctx=18405, majf=0, minf=2 00:12:30.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:30.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.489 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.489 issued rwts: total=18391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:30.489 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69052: Mon Nov 18 23:56:37 2024 00:12:30.489 read: IOPS=4269, BW=16.7MiB/s (17.5MB/s)(54.7MiB/3280msec) 00:12:30.489 slat (usec): min=8, max=14850, avg=16.19, stdev=158.20 00:12:30.489 clat (usec): min=178, max=2742, avg=216.66, stdev=53.08 00:12:30.489 lat (usec): min=191, max=15410, avg=232.85, stdev=169.75 00:12:30.489 clat percentiles (usec): 00:12:30.489 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 196], 00:12:30.489 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:12:30.489 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 302], 00:12:30.489 | 99.00th=[ 379], 99.50th=[ 392], 99.90th=[ 570], 99.95th=[ 996], 00:12:30.489 | 99.99th=[ 2311] 00:12:30.489 bw ( KiB/s): min=17574, max=18072, per=35.14%, avg=17743.17, stdev=220.69, samples=6 00:12:30.489 iops : min= 4393, max= 4518, avg=4435.67, stdev=55.28, samples=6 00:12:30.489 lat (usec) : 250=94.09%, 500=5.76%, 750=0.06%, 1000=0.04% 00:12:30.489 lat (msec) : 2=0.03%, 4=0.02% 00:12:30.489 cpu : usr=1.28%, sys=5.52%, ctx=14009, majf=0, minf=2 00:12:30.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:30.489 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.489 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.489 issued rwts: total=14005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:30.489 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69053: Mon Nov 18 23:56:37 2024 00:12:30.489 read: IOPS=2701, BW=10.6MiB/s (11.1MB/s)(31.1MiB/2951msec) 00:12:30.489 slat (nsec): min=14808, max=72228, avg=22287.92, stdev=4342.22 00:12:30.489 clat (usec): min=191, max=2965, avg=345.34, stdev=52.66 00:12:30.489 lat (usec): min=207, max=2987, avg=367.63, stdev=52.95 00:12:30.489 clat percentiles (usec): 00:12:30.489 | 1.00th=[ 297], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 326], 00:12:30.489 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 347], 00:12:30.489 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 371], 95.00th=[ 379], 00:12:30.489 | 99.00th=[ 474], 99.50th=[ 553], 99.90th=[ 898], 99.95th=[ 1434], 00:12:30.489 | 99.99th=[ 2966] 00:12:30.489 bw ( KiB/s): min=10768, max=10954, per=21.50%, avg=10855.60, stdev=80.13, samples=5 00:12:30.489 iops : min= 2692, max= 2738, avg=2713.80, stdev=19.88, samples=5 00:12:30.489 lat (usec) : 250=0.31%, 500=98.85%, 750=0.68%, 1000=0.09% 00:12:30.489 lat (msec) : 2=0.05%, 4=0.01% 00:12:30.489 cpu : usr=1.29%, sys=5.39%, ctx=7974, majf=0, minf=2 00:12:30.489 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:30.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.489 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:30.489 issued rwts: total=7972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:30.489 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:30.489 00:12:30.489 Run status group 0 (all jobs): 00:12:30.489 READ: bw=49.3MiB/s (51.7MB/s), 10.6MiB/s-18.0MiB/s (11.1MB/s-18.9MB/s), io=196MiB (206MB), run=2951-3984msec 00:12:30.489 00:12:30.489 Disk stats (read/write): 00:12:30.489 nvme0n1: ios=9205/0, merge=0/0, ticks=3132/0, in_queue=3132, util=95.37% 00:12:30.489 nvme0n2: ios=17815/0, merge=0/0, ticks=3580/0, in_queue=3580, util=95.49% 00:12:30.489 nvme0n3: ios=13576/0, merge=0/0, ticks=2942/0, in_queue=2942, util=96.12% 00:12:30.489 nvme0n4: ios=7770/0, merge=0/0, ticks=2725/0, in_queue=2725, util=96.80% 00:12:30.748 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:30.748 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:31.006 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:31.006 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:31.574 23:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:31.574 23:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:31.833 23:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:12:31.833 23:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:32.401 23:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:32.401 23:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:32.666 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:32.666 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 69010 00:12:32.666 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:32.666 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:32.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.929 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:32.929 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:12:32.929 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:32.929 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.929 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:32.929 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:32.929 nvmf hotplug test: fio failed as expected 00:12:32.929 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:12:32.929 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:32.929 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:32.929 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:33.189 23:56:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:33.189 rmmod nvme_tcp 00:12:33.189 rmmod nvme_fabrics 00:12:33.189 rmmod nvme_keyring 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 68621 ']' 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 68621 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 68621 ']' 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 68621 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68621 00:12:33.189 killing process with pid 68621 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68621' 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 68621 00:12:33.189 23:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 68621 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:34.567 23:56:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:34.567 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:34.567 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:34.567 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:34.567 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.567 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.567 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.567 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:12:34.567 00:12:34.567 real 0m22.395s 00:12:34.567 user 1m22.512s 00:12:34.567 sys 0m10.895s 00:12:34.567 ************************************ 00:12:34.567 END TEST nvmf_fio_target 00:12:34.567 ************************************ 00:12:34.567 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.567 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.567 23:56:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:34.567 23:56:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:34.567 23:56:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.568 23:56:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:34.568 ************************************ 00:12:34.568 START TEST nvmf_bdevio 00:12:34.568 ************************************ 00:12:34.568 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:34.568 * Looking for test storage... 
00:12:34.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:34.568 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:34.568 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:12:34.568 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:34.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.827 --rc genhtml_branch_coverage=1 00:12:34.827 --rc genhtml_function_coverage=1 00:12:34.827 --rc genhtml_legend=1 00:12:34.827 --rc geninfo_all_blocks=1 00:12:34.827 --rc geninfo_unexecuted_blocks=1 00:12:34.827 00:12:34.827 ' 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:34.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.827 --rc genhtml_branch_coverage=1 00:12:34.827 --rc genhtml_function_coverage=1 00:12:34.827 --rc genhtml_legend=1 00:12:34.827 --rc geninfo_all_blocks=1 00:12:34.827 --rc geninfo_unexecuted_blocks=1 00:12:34.827 00:12:34.827 ' 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:34.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.827 --rc genhtml_branch_coverage=1 00:12:34.827 --rc genhtml_function_coverage=1 00:12:34.827 --rc genhtml_legend=1 00:12:34.827 --rc geninfo_all_blocks=1 00:12:34.827 --rc geninfo_unexecuted_blocks=1 00:12:34.827 00:12:34.827 ' 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:34.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.827 --rc genhtml_branch_coverage=1 00:12:34.827 --rc genhtml_function_coverage=1 00:12:34.827 --rc genhtml_legend=1 00:12:34.827 --rc geninfo_all_blocks=1 00:12:34.827 --rc geninfo_unexecuted_blocks=1 00:12:34.827 00:12:34.827 ' 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.827 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:34.828 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
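The nvmftestinit call traced above is what builds the virtual test network that the next stretch of the log walks through: veth pairs for the initiator and target sides, the target ends moved into a dedicated network namespace, and all host-side peers enslaved to one bridge. A minimal standalone sketch of that topology, using the addresses and interface names from the traced commands (one pair per side shown; root and Linux assumed):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end...
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # ...moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br     # bridge the host-side peers together
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3                          # initiator-to-target reachability check

The "Cannot find device" and "Cannot open network namespace" messages in the trace below are expected: the script first tears down any leftover topology, and each failing cleanup command is followed by a "true" in the trace, so the failures are deliberately swallowed before the fresh topology is created.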
00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:34.828 Cannot find device "nvmf_init_br" 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:34.828 Cannot find device "nvmf_init_br2" 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:34.828 Cannot find device "nvmf_tgt_br" 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:34.828 Cannot find device "nvmf_tgt_br2" 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:34.828 Cannot find device "nvmf_init_br" 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:34.828 Cannot find device "nvmf_init_br2" 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:34.828 Cannot find device "nvmf_tgt_br" 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:34.828 Cannot find device "nvmf_tgt_br2" 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:34.828 Cannot find device "nvmf_br" 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:34.828 Cannot find device "nvmf_init_if" 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:34.828 Cannot find device "nvmf_init_if2" 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:34.828 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:34.828 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:12:34.828 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:34.828 
23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:35.087 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:35.087 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:12:35.087 00:12:35.087 --- 10.0.0.3 ping statistics --- 00:12:35.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.087 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:35.087 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:35.087 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:12:35.087 00:12:35.087 --- 10.0.0.4 ping statistics --- 00:12:35.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.087 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:12:35.087 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:35.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:35.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:35.087 00:12:35.087 --- 10.0.0.1 ping statistics --- 00:12:35.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.087 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:35.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:35.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:12:35.088 00:12:35.088 --- 10.0.0.2 ping statistics --- 00:12:35.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.088 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=69404 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 69404 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 69404 ']' 00:12:35.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:35.088 23:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:35.347 [2024-11-18 23:56:41.885660] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:12:35.347 [2024-11-18 23:56:41.886113] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.606 [2024-11-18 23:56:42.078104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.606 [2024-11-18 23:56:42.207252] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.606 [2024-11-18 23:56:42.207330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.606 [2024-11-18 23:56:42.207365] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.606 [2024-11-18 23:56:42.207380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.606 [2024-11-18 23:56:42.207396] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.606 [2024-11-18 23:56:42.209930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:35.606 [2024-11-18 23:56:42.210093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:35.606 [2024-11-18 23:56:42.210257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:35.606 [2024-11-18 23:56:42.210702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.869 [2024-11-18 23:56:42.388903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:36.183 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.183 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:36.183 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:36.183 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:36.183 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:36.183 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.183 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:36.183 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.183 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:36.183 [2024-11-18 23:56:42.827161] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.183 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.183 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:36.183 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.183 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:36.480 Malloc0 00:12:36.480 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.480 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:12:36.480 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.480 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:36.480 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.480 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:36.480 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.480 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:36.481 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.481 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:36.481 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.481 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:36.481 [2024-11-18 23:56:42.928997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:36.481 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.481 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:36.481 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:36.481 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:36.481 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:36.481 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:36.481 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:36.481 { 00:12:36.481 "params": { 00:12:36.481 "name": "Nvme$subsystem", 00:12:36.481 "trtype": "$TEST_TRANSPORT", 00:12:36.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:36.481 "adrfam": "ipv4", 00:12:36.481 "trsvcid": "$NVMF_PORT", 00:12:36.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:36.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:36.481 "hdgst": ${hdgst:-false}, 00:12:36.481 "ddgst": ${ddgst:-false} 00:12:36.481 }, 00:12:36.481 "method": "bdev_nvme_attach_controller" 00:12:36.481 } 00:12:36.481 EOF 00:12:36.481 )") 00:12:36.481 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:36.481 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
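Stripped of the rpc_cmd plumbing, the target provisioning traced above comes down to five RPCs followed by the bdevio run; the resolved JSON that gen_nvmf_target_json emits is printed in the trace that follows. A condensed sketch, assuming the repo's rpc.py client that the rpc_cmd helper wraps (flag glosses are the obvious readings; the values are exactly as traced):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, options as traced
    $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)

The traced "--json /dev/fd/62" is just this process substitution as bash resolves it. Note the listener address is 10.0.0.3, the namespace end of the veth pair, so bdevio connects to the target across the bridge rather than over loopback.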
00:12:36.481 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:36.481 23:56:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:36.481 "params": { 00:12:36.481 "name": "Nvme1", 00:12:36.481 "trtype": "tcp", 00:12:36.481 "traddr": "10.0.0.3", 00:12:36.481 "adrfam": "ipv4", 00:12:36.481 "trsvcid": "4420", 00:12:36.481 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:36.481 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:36.481 "hdgst": false, 00:12:36.481 "ddgst": false 00:12:36.481 }, 00:12:36.481 "method": "bdev_nvme_attach_controller" 00:12:36.481 }' 00:12:36.481 [2024-11-18 23:56:43.025325] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:36.481 [2024-11-18 23:56:43.025475] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69440 ] 00:12:36.739 [2024-11-18 23:56:43.195497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:36.740 [2024-11-18 23:56:43.292406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.740 [2024-11-18 23:56:43.292515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.740 [2024-11-18 23:56:43.292540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.998 [2024-11-18 23:56:43.479318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:36.998 I/O targets: 00:12:36.998 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:36.998 00:12:36.998 00:12:36.998 CUnit - A unit testing framework for C - Version 2.1-3 00:12:36.998 http://cunit.sourceforge.net/ 00:12:36.998 00:12:36.998 00:12:36.998 Suite: bdevio tests on: Nvme1n1 00:12:36.998 Test: blockdev write read block ...passed 00:12:36.998 Test: blockdev write zeroes read block ...passed 00:12:36.998 Test: blockdev write zeroes read no split ...passed 00:12:37.257 Test: blockdev write zeroes read split ...passed 00:12:37.257 Test: blockdev write zeroes read split partial ...passed 00:12:37.257 Test: blockdev reset ...[2024-11-18 23:56:43.744733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:37.257 [2024-11-18 23:56:43.745168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:12:37.257 [2024-11-18 23:56:43.760129] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:37.257 passed 00:12:37.257 Test: blockdev write read 8 blocks ...passed 00:12:37.257 Test: blockdev write read size > 128k ...passed 00:12:37.257 Test: blockdev write read invalid size ...passed 00:12:37.257 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:37.257 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:37.257 Test: blockdev write read max offset ...passed 00:12:37.257 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:37.257 Test: blockdev writev readv 8 blocks ...passed 00:12:37.257 Test: blockdev writev readv 30 x 1block ...passed 00:12:37.257 Test: blockdev writev readv block ...passed 00:12:37.257 Test: blockdev writev readv size > 128k ...passed 00:12:37.257 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:37.257 Test: blockdev comparev and writev ...[2024-11-18 23:56:43.772941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:37.257 [2024-11-18 23:56:43.773224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:37.257 [2024-11-18 23:56:43.773361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:37.257 [2024-11-18 23:56:43.773463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:37.257 [2024-11-18 23:56:43.774134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:37.257 [2024-11-18 23:56:43.774279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:37.257 [2024-11-18 23:56:43.774377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:37.257 [2024-11-18 23:56:43.774588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:37.257 [2024-11-18 23:56:43.775114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:37.258 [2024-11-18 23:56:43.775341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:37.258 [2024-11-18 23:56:43.775571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:37.258 [2024-11-18 23:56:43.775851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:37.258 [2024-11-18 23:56:43.776503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:37.258 [2024-11-18 23:56:43.776726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:37.258 [2024-11-18 23:56:43.776962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:37.258 [2024-11-18 23:56:43.777196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:37.258 passed 00:12:37.258 Test: blockdev nvme passthru rw ...passed 00:12:37.258 Test: blockdev nvme passthru vendor specific ...[2024-11-18 23:56:43.778662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:37.258 [2024-11-18 23:56:43.778898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:37.258 [2024-11-18 23:56:43.779212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:37.258 [2024-11-18 23:56:43.779436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:37.258 [2024-11-18 23:56:43.779794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:37.258 [2024-11-18 23:56:43.780036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:37.258 [2024-11-18 23:56:43.780412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:37.258 [2024-11-18 23:56:43.780654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:37.258 passed 00:12:37.258 Test: blockdev nvme admin passthru ...passed 00:12:37.258 Test: blockdev copy ...passed 00:12:37.258 00:12:37.258 Run Summary: Type Total Ran Passed Failed Inactive 00:12:37.258 suites 1 1 n/a 0 0 00:12:37.258 tests 23 23 23 0 0 00:12:37.258 asserts 152 152 152 0 n/a 00:12:37.258 00:12:37.258 Elapsed time = 0.305 seconds 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:38.195 rmmod nvme_tcp 00:12:38.195 rmmod nvme_fabrics 00:12:38.195 rmmod nvme_keyring 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
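Teardown mirrors the setup, and the trace around this point shows why every rule added by the ipts wrapper earlier carried an "SPDK_NVMF:" comment: cleanup can strip exactly those rules with a grep. A sketch of the nvmftestfini sequence, with the pid and names from this run (the initiator modules were just unloaded above; namespace removal is assumed to happen inside the _remove_spdk_ns helper, whose output the trace silences):

    modprobe -v -r nvme-tcp             # unload initiator modules pulled in for the test
    modprobe -v -r nvme-fabrics
    kill 69404                          # the nvmf_tgt pid recorded at startup ($nvmfpid)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules
    ip link set nvmf_init_br nomaster   # detach peers, then delete everything
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk    # assumed final step, performed by _remove_spdk_ns here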
00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 69404 ']' 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 69404 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 69404 ']' 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 69404 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:38.195 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.196 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69404 00:12:38.196 killing process with pid 69404 00:12:38.196 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:38.196 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:38.196 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69404' 00:12:38.196 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 69404 00:12:38.196 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 69404 00:12:39.574 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:39.574 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:39.574 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:39.574 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:39.574 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:39.574 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:39.574 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:39.574 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:39.574 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:39.574 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:39.574 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:39.574 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:39.574 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:39.574 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:39.574 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:39.574 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:39.574 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:39.574 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:39.574 23:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:12:39.574 23:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:39.574 23:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:39.574 23:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:39.574 23:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:39.574 23:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.574 23:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.574 23:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.574 23:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:12:39.574 00:12:39.574 real 0m4.996s 00:12:39.574 user 0m18.060s 00:12:39.574 sys 0m1.004s 00:12:39.574 23:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.574 23:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:39.574 ************************************ 00:12:39.574 END TEST nvmf_bdevio 00:12:39.574 ************************************ 00:12:39.574 23:56:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:39.574 ************************************ 00:12:39.574 END TEST nvmf_target_core 00:12:39.574 ************************************ 00:12:39.574 00:12:39.574 real 2m55.733s 00:12:39.574 user 7m47.323s 00:12:39.574 sys 0m54.189s 00:12:39.574 23:56:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.574 23:56:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:39.574 23:56:46 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:39.574 23:56:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:39.574 23:56:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.574 23:56:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:39.574 ************************************ 00:12:39.574 START TEST nvmf_target_extra 00:12:39.574 ************************************ 00:12:39.574 23:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:39.833 * Looking for test storage... 
00:12:39.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:39.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.833 --rc genhtml_branch_coverage=1 00:12:39.833 --rc genhtml_function_coverage=1 00:12:39.833 --rc genhtml_legend=1 00:12:39.833 --rc geninfo_all_blocks=1 00:12:39.833 --rc geninfo_unexecuted_blocks=1 00:12:39.833 00:12:39.833 ' 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:39.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.833 --rc genhtml_branch_coverage=1 00:12:39.833 --rc genhtml_function_coverage=1 00:12:39.833 --rc genhtml_legend=1 00:12:39.833 --rc geninfo_all_blocks=1 00:12:39.833 --rc geninfo_unexecuted_blocks=1 00:12:39.833 00:12:39.833 ' 00:12:39.833 23:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:39.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.834 --rc genhtml_branch_coverage=1 00:12:39.834 --rc genhtml_function_coverage=1 00:12:39.834 --rc genhtml_legend=1 00:12:39.834 --rc geninfo_all_blocks=1 00:12:39.834 --rc geninfo_unexecuted_blocks=1 00:12:39.834 00:12:39.834 ' 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:39.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.834 --rc genhtml_branch_coverage=1 00:12:39.834 --rc genhtml_function_coverage=1 00:12:39.834 --rc genhtml_legend=1 00:12:39.834 --rc geninfo_all_blocks=1 00:12:39.834 --rc geninfo_unexecuted_blocks=1 00:12:39.834 00:12:39.834 ' 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.834 23:56:46 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:39.834 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:39.834 ************************************ 00:12:39.834 START TEST nvmf_auth_target 00:12:39.834 ************************************ 00:12:39.834 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:39.834 * Looking for test storage... 
00:12:39.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:40.093 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:40.093 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:12:40.093 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:40.093 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:40.093 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.093 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.093 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:40.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.094 --rc genhtml_branch_coverage=1 00:12:40.094 --rc genhtml_function_coverage=1 00:12:40.094 --rc genhtml_legend=1 00:12:40.094 --rc geninfo_all_blocks=1 00:12:40.094 --rc geninfo_unexecuted_blocks=1 00:12:40.094 00:12:40.094 ' 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:40.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.094 --rc genhtml_branch_coverage=1 00:12:40.094 --rc genhtml_function_coverage=1 00:12:40.094 --rc genhtml_legend=1 00:12:40.094 --rc geninfo_all_blocks=1 00:12:40.094 --rc geninfo_unexecuted_blocks=1 00:12:40.094 00:12:40.094 ' 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:40.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.094 --rc genhtml_branch_coverage=1 00:12:40.094 --rc genhtml_function_coverage=1 00:12:40.094 --rc genhtml_legend=1 00:12:40.094 --rc geninfo_all_blocks=1 00:12:40.094 --rc geninfo_unexecuted_blocks=1 00:12:40.094 00:12:40.094 ' 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:40.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.094 --rc genhtml_branch_coverage=1 00:12:40.094 --rc genhtml_function_coverage=1 00:12:40.094 --rc genhtml_legend=1 00:12:40.094 --rc geninfo_all_blocks=1 00:12:40.094 --rc geninfo_unexecuted_blocks=1 00:12:40.094 00:12:40.094 ' 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:40.094 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:40.094 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:40.095 
23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:40.095 Cannot find device "nvmf_init_br" 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:40.095 Cannot find device "nvmf_init_br2" 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:40.095 Cannot find device "nvmf_tgt_br" 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:40.095 Cannot find device "nvmf_tgt_br2" 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:40.095 Cannot find device "nvmf_init_br" 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:40.095 Cannot find device "nvmf_init_br2" 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:40.095 Cannot find device "nvmf_tgt_br" 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:40.095 Cannot find device "nvmf_tgt_br2" 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:12:40.095 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:40.095 Cannot find device "nvmf_br" 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:40.354 Cannot find device "nvmf_init_if" 00:12:40.354 23:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:40.354 Cannot find device "nvmf_init_if2" 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:40.354 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:40.354 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:40.354 23:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:40.354 23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:40.354 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:40.354 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:40.354 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:40.354 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:40.354 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:40.354 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:40.613 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:40.613 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:12:40.613 00:12:40.613 --- 10.0.0.3 ping statistics --- 00:12:40.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.613 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:40.613 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:40.613 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:12:40.613 00:12:40.613 --- 10.0.0.4 ping statistics --- 00:12:40.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.613 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:40.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:40.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:12:40.613 00:12:40.613 --- 10.0.0.1 ping statistics --- 00:12:40.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.613 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:40.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:40.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:12:40.613 00:12:40.613 --- 10.0.0.2 ping statistics --- 00:12:40.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.613 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=69771 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 69771 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 69771 ']' 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
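By this point nvmf_veth_init has completed and all four pings came back, which validates the test topology: initiator interfaces 10.0.0.1/10.0.0.2 stay in the default network namespace, target interfaces 10.0.0.3/10.0.0.4 live inside nvmf_tgt_ns_spdk, and the veth peers are enslaved to the nvmf_br bridge, with iptables ACCEPT rules punched for TCP port 4420. A condensed sketch of the commands traced above (the second interface pair, the link-up steps, and the SPDK_NVMF comment tags are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The namespace split is the point of NET_TYPE=virt: the nvmf_tgt launched next runs under "ip netns exec nvmf_tgt_ns_spdk" and listens on 10.0.0.3, while the host-side commands run in the default namespace, so target and initiator exercise a real TCP path without any physical NIC.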
00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.613 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.549 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.549 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:41.549 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:41.549 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:41.549 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=69803 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b8acf8b65a863f48feea267a9248936062b95146532c3759 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.SN3 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b8acf8b65a863f48feea267a9248936062b95146532c3759 0 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b8acf8b65a863f48feea267a9248936062b95146532c3759 0 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b8acf8b65a863f48feea267a9248936062b95146532c3759 00:12:41.809 23:56:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.SN3 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.SN3 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.SN3 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=495c35a748c742ff61cc4329adf8874fe91dd5becb6acc18bd524e2928c553eb 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.sYk 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 495c35a748c742ff61cc4329adf8874fe91dd5becb6acc18bd524e2928c553eb 3 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 495c35a748c742ff61cc4329adf8874fe91dd5becb6acc18bd524e2928c553eb 3 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=495c35a748c742ff61cc4329adf8874fe91dd5becb6acc18bd524e2928c553eb 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.sYk 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.sYk 00:12:41.809 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.sYk 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:41.810 23:56:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=72dd6f046e13dc26a931d437d3efe0ae 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.kGi 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 72dd6f046e13dc26a931d437d3efe0ae 1 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 72dd6f046e13dc26a931d437d3efe0ae 1 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=72dd6f046e13dc26a931d437d3efe0ae 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.kGi 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.kGi 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.kGi 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9365f18aa8c5ae918e0a60f9aab36c47eb60d5eaac36b28b 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.dhO 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9365f18aa8c5ae918e0a60f9aab36c47eb60d5eaac36b28b 2 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
9365f18aa8c5ae918e0a60f9aab36c47eb60d5eaac36b28b 2 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9365f18aa8c5ae918e0a60f9aab36c47eb60d5eaac36b28b 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.dhO 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.dhO 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.dhO 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b5b21ca984ae45282d8d0f83517c1af23e224090608ccdbf 00:12:41.810 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.t92 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b5b21ca984ae45282d8d0f83517c1af23e224090608ccdbf 2 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b5b21ca984ae45282d8d0f83517c1af23e224090608ccdbf 2 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b5b21ca984ae45282d8d0f83517c1af23e224090608ccdbf 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.t92 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.t92 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.t92 00:12:42.069 23:56:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a2cb49d2f1605da522c7582aa63bd4ea 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.s9h 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a2cb49d2f1605da522c7582aa63bd4ea 1 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a2cb49d2f1605da522c7582aa63bd4ea 1 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a2cb49d2f1605da522c7582aa63bd4ea 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.s9h 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.s9h 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.s9h 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=650c0511c78ea78665107c0d2d348f98d836c766967424b6c31da42578d153b1 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:42.069 
23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.xLH 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 650c0511c78ea78665107c0d2d348f98d836c766967424b6c31da42578d153b1 3 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 650c0511c78ea78665107c0d2d348f98d836c766967424b6c31da42578d153b1 3 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=650c0511c78ea78665107c0d2d348f98d836c766967424b6c31da42578d153b1 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.xLH 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.xLH 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.xLH 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 69771 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 69771 ']' 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.069 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.328 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.328 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:42.328 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 69803 /var/tmp/host.sock 00:12:42.328 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 69803 ']' 00:12:42.328 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:42.328 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
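The four gen_dhchap_key calls above all follow the same recipe: pull raw hex from /dev/urandom with "xxd -p -c0 -l <bytes>", then wrap it via the traced "python -" snippet into an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<hash-id>:<base64-payload>:, with the hash id taken from the digests map at common.sh@752 (00 = null, 01 = sha256, 02 = sha384, 03 = sha512). A sketch of an equivalent one-liner, under the assumption that the payload is the ASCII key with its CRC-32 appended little-endian (consistent with the four extra bytes visible in the base64 secret later in the log, but not something the trace itself proves):

  key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, as for keys[0]
  python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:00:%s:" % base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode())' "$key"

Each secret lands in a mktemp file that is then chmod 0600, and it is that file path, not the secret text, that the keyring RPCs below consume.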
00:12:42.328 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:42.328 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.328 23:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.913 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.913 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:42.913 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:12:42.913 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.913 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.913 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.913 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:42.913 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.SN3 00:12:42.913 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.913 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.913 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.913 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.SN3 00:12:42.913 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.SN3 00:12:43.170 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.sYk ]] 00:12:43.170 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sYk 00:12:43.170 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.170 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.170 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.171 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sYk 00:12:43.171 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sYk 00:12:43.429 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:43.429 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kGi 00:12:43.429 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.429 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:12:43.429 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.429 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.kGi 00:12:43.429 23:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.kGi 00:12:43.687 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.dhO ]] 00:12:43.687 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dhO 00:12:43.687 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.687 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.687 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.687 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dhO 00:12:43.687 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dhO 00:12:43.945 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:43.945 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.t92 00:12:43.945 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.945 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.945 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.946 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.t92 00:12:43.946 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.t92 00:12:44.204 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.s9h ]] 00:12:44.204 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.s9h 00:12:44.204 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.204 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.204 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.204 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.s9h 00:12:44.204 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.s9h 00:12:44.463 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 
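Each pass of this loop registers one key file in two places: into the target's keyring over the default RPC socket (rpc_cmd keyring_file_add_key keyN <file>) and into the host application over /var/tmp/host.sock (the hostrpc wrapper). When a companion ckeyN exists it is added the same way; that is the controller-side key, so both ends hold what bidirectional DH-HMAC-CHAP needs. Condensed pattern for one iteration, using the key1/ckey1 files generated above:

  rpc.py keyring_file_add_key key1 /tmp/spdk.key-sha256.kGi
  rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.kGi
  rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dhO
  rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dhO

key3 is the exception: ckeys[3] was left empty above, so the "[[ -n '' ]]" check just below skips the ckey3 registration and that key only exercises unidirectional authentication.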
00:12:44.463 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.xLH 00:12:44.463 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.463 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.463 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.463 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.xLH 00:12:44.463 23:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.xLH 00:12:44.721 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:12:44.721 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:44.721 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:44.721 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.721 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:44.721 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:44.980 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:12:44.980 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.980 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:44.980 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:44.980 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:44.980 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.980 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.980 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.980 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.980 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.980 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.980 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.980 23:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.238 00:12:45.238 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:45.238 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:45.238 23:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.496 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.496 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.496 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.496 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.496 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.496 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:45.496 { 00:12:45.496 "cntlid": 1, 00:12:45.496 "qid": 0, 00:12:45.496 "state": "enabled", 00:12:45.496 "thread": "nvmf_tgt_poll_group_000", 00:12:45.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:12:45.496 "listen_address": { 00:12:45.496 "trtype": "TCP", 00:12:45.496 "adrfam": "IPv4", 00:12:45.496 "traddr": "10.0.0.3", 00:12:45.496 "trsvcid": "4420" 00:12:45.496 }, 00:12:45.496 "peer_address": { 00:12:45.496 "trtype": "TCP", 00:12:45.496 "adrfam": "IPv4", 00:12:45.496 "traddr": "10.0.0.1", 00:12:45.496 "trsvcid": "58566" 00:12:45.496 }, 00:12:45.496 "auth": { 00:12:45.496 "state": "completed", 00:12:45.496 "digest": "sha256", 00:12:45.496 "dhgroup": "null" 00:12:45.496 } 00:12:45.496 } 00:12:45.496 ]' 00:12:45.496 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.496 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:45.496 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.756 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:45.756 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.756 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.756 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.756 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.014 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret 
DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:12:46.014 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:12:50.198 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.199 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:12:50.199 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.199 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.199 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.199 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.199 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:50.199 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:50.457 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:12:50.457 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.457 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:50.457 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:50.457 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:50.457 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.457 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.457 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.457 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.457 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.457 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.457 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.457 23:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.715 00:12:50.715 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.715 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.715 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.974 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.974 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.974 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.974 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.974 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.974 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:50.974 { 00:12:50.974 "cntlid": 3, 00:12:50.974 "qid": 0, 00:12:50.974 "state": "enabled", 00:12:50.974 "thread": "nvmf_tgt_poll_group_000", 00:12:50.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:12:50.974 "listen_address": { 00:12:50.974 "trtype": "TCP", 00:12:50.974 "adrfam": "IPv4", 00:12:50.974 "traddr": "10.0.0.3", 00:12:50.974 "trsvcid": "4420" 00:12:50.974 }, 00:12:50.974 "peer_address": { 00:12:50.974 "trtype": "TCP", 00:12:50.974 "adrfam": "IPv4", 00:12:50.974 "traddr": "10.0.0.1", 00:12:50.974 "trsvcid": "58600" 00:12:50.974 }, 00:12:50.974 "auth": { 00:12:50.974 "state": "completed", 00:12:50.974 "digest": "sha256", 00:12:50.974 "dhgroup": "null" 00:12:50.974 } 00:12:50.974 } 00:12:50.974 ]' 00:12:50.974 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:50.974 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:50.974 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.232 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:51.232 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.232 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.232 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.232 23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.490 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:12:51.490 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:12:52.055 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.311 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:12:52.311 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.311 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.311 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.311 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.311 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:52.312 23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:52.569 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:12:52.569 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.569 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:52.569 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:52.569 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:52.569 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.569 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.569 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.569 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.569 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.569 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.569 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.569 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.826 00:12:52.826 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.826 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.826 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.082 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.082 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.082 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.082 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.082 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.082 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.082 { 00:12:53.082 "cntlid": 5, 00:12:53.082 "qid": 0, 00:12:53.082 "state": "enabled", 00:12:53.082 "thread": "nvmf_tgt_poll_group_000", 00:12:53.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:12:53.082 "listen_address": { 00:12:53.082 "trtype": "TCP", 00:12:53.082 "adrfam": "IPv4", 00:12:53.082 "traddr": "10.0.0.3", 00:12:53.082 "trsvcid": "4420" 00:12:53.082 }, 00:12:53.082 "peer_address": { 00:12:53.082 "trtype": "TCP", 00:12:53.082 "adrfam": "IPv4", 00:12:53.082 "traddr": "10.0.0.1", 00:12:53.082 "trsvcid": "58644" 00:12:53.082 }, 00:12:53.082 "auth": { 00:12:53.082 "state": "completed", 00:12:53.082 "digest": "sha256", 00:12:53.082 "dhgroup": "null" 00:12:53.082 } 00:12:53.082 } 00:12:53.082 ]' 00:12:53.082 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.340 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:53.340 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.340 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:53.340 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.340 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.340 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.340 23:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
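Each loop iteration traced above runs the same connect-and-verify cycle for one digest/dhgroup/key combination. Stripped of the xtrace noise, one pass (sha256 digest, null DH group, key2) reduces to roughly the sketch below, with the subsystem NQN, host NQN/UUID, and 10.0.0.3:4420 listener taken from this run; the DHHC-1 secrets are abbreviated here, and the target-side calls are again assumed to use the default RPC socket:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a
    # restrict the host to a single digest/dhgroup combination for this pass
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    # authorize the host on the target with key2/ckey2, then attach from the host
    $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # a successful handshake leaves the qpair reporting auth.state == "completed"
    $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # kernel-initiator leg: connect with nvme-cli using the plaintext DHHC-1
    # secrets (values abbreviated; the full secrets appear in the trace)
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" \
        --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 \
        --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    $RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"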
00:12:53.611 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:12:53.611 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:12:54.188 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.188 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:12:54.188 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.188 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.188 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.188 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.188 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:54.188 23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:54.446 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:12:54.446 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.446 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:54.446 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:54.446 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:54.446 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.446 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3 00:12:54.446 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.446 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.446 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.446 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:54.446 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.446 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:55.011 00:12:55.011 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.012 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.012 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.012 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.012 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.012 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.012 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.270 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.270 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.270 { 00:12:55.270 "cntlid": 7, 00:12:55.270 "qid": 0, 00:12:55.270 "state": "enabled", 00:12:55.270 "thread": "nvmf_tgt_poll_group_000", 00:12:55.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:12:55.270 "listen_address": { 00:12:55.270 "trtype": "TCP", 00:12:55.270 "adrfam": "IPv4", 00:12:55.270 "traddr": "10.0.0.3", 00:12:55.270 "trsvcid": "4420" 00:12:55.270 }, 00:12:55.270 "peer_address": { 00:12:55.270 "trtype": "TCP", 00:12:55.270 "adrfam": "IPv4", 00:12:55.270 "traddr": "10.0.0.1", 00:12:55.270 "trsvcid": "45426" 00:12:55.270 }, 00:12:55.270 "auth": { 00:12:55.270 "state": "completed", 00:12:55.270 "digest": "sha256", 00:12:55.270 "dhgroup": "null" 00:12:55.270 } 00:12:55.270 } 00:12:55.270 ]' 00:12:55.270 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.270 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:55.270 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.270 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:55.270 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.270 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.270 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.270 23:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.528 23:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:12:55.528 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:12:56.095 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.095 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:12:56.095 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.095 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.353 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.353 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:56.353 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.353 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:56.353 23:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:56.611 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:12:56.611 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.611 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:56.611 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:56.611 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:56.611 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.611 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.611 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.611 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.611 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.611 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.611 23:57:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.611 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.869 00:12:56.869 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.869 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.869 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.127 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.127 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.127 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.127 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.127 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.127 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.127 { 00:12:57.127 "cntlid": 9, 00:12:57.127 "qid": 0, 00:12:57.127 "state": "enabled", 00:12:57.127 "thread": "nvmf_tgt_poll_group_000", 00:12:57.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:12:57.127 "listen_address": { 00:12:57.127 "trtype": "TCP", 00:12:57.127 "adrfam": "IPv4", 00:12:57.127 "traddr": "10.0.0.3", 00:12:57.127 "trsvcid": "4420" 00:12:57.127 }, 00:12:57.127 "peer_address": { 00:12:57.127 "trtype": "TCP", 00:12:57.127 "adrfam": "IPv4", 00:12:57.127 "traddr": "10.0.0.1", 00:12:57.127 "trsvcid": "45444" 00:12:57.127 }, 00:12:57.127 "auth": { 00:12:57.127 "state": "completed", 00:12:57.127 "digest": "sha256", 00:12:57.127 "dhgroup": "ffdhe2048" 00:12:57.127 } 00:12:57.127 } 00:12:57.127 ]' 00:12:57.127 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.127 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:57.127 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.385 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:57.385 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.385 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.385 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.385 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.643 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:12:57.643 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:12:58.208 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.209 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:12:58.209 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.209 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.209 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.209 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.209 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:58.209 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:58.468 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:12:58.468 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.468 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:58.468 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:58.468 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:58.468 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.468 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.468 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.468 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.468 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:58.468 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.468 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.468 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.034 00:12:59.034 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:59.034 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:59.034 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.292 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.292 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.292 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.292 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.292 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.292 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:59.292 { 00:12:59.292 "cntlid": 11, 00:12:59.292 "qid": 0, 00:12:59.292 "state": "enabled", 00:12:59.292 "thread": "nvmf_tgt_poll_group_000", 00:12:59.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:12:59.292 "listen_address": { 00:12:59.292 "trtype": "TCP", 00:12:59.292 "adrfam": "IPv4", 00:12:59.292 "traddr": "10.0.0.3", 00:12:59.292 "trsvcid": "4420" 00:12:59.292 }, 00:12:59.292 "peer_address": { 00:12:59.292 "trtype": "TCP", 00:12:59.292 "adrfam": "IPv4", 00:12:59.292 "traddr": "10.0.0.1", 00:12:59.292 "trsvcid": "45452" 00:12:59.292 }, 00:12:59.292 "auth": { 00:12:59.292 "state": "completed", 00:12:59.292 "digest": "sha256", 00:12:59.292 "dhgroup": "ffdhe2048" 00:12:59.292 } 00:12:59.292 } 00:12:59.292 ]' 00:12:59.292 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:59.292 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:59.292 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.292 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:59.292 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.292 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.292 23:57:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.292 23:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.550 23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:12:59.550 23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:13:00.485 23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.485 23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:00.485 23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.485 23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.485 23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.485 23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.485 23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:00.485 23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:00.743 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:13:00.743 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.743 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:00.743 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:00.743 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:00.743 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.743 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.743 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.743 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:00.743 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.743 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.743 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.743 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.001 00:13:01.001 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.001 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.001 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.259 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.259 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.259 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.259 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.259 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.259 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:01.259 { 00:13:01.259 "cntlid": 13, 00:13:01.259 "qid": 0, 00:13:01.259 "state": "enabled", 00:13:01.259 "thread": "nvmf_tgt_poll_group_000", 00:13:01.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:01.259 "listen_address": { 00:13:01.259 "trtype": "TCP", 00:13:01.259 "adrfam": "IPv4", 00:13:01.259 "traddr": "10.0.0.3", 00:13:01.259 "trsvcid": "4420" 00:13:01.259 }, 00:13:01.259 "peer_address": { 00:13:01.259 "trtype": "TCP", 00:13:01.259 "adrfam": "IPv4", 00:13:01.259 "traddr": "10.0.0.1", 00:13:01.259 "trsvcid": "45464" 00:13:01.259 }, 00:13:01.259 "auth": { 00:13:01.259 "state": "completed", 00:13:01.259 "digest": "sha256", 00:13:01.259 "dhgroup": "ffdhe2048" 00:13:01.259 } 00:13:01.259 } 00:13:01.259 ]' 00:13:01.259 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:01.259 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:01.259 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:01.518 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:01.518 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.518 23:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.518 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.518 23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.775 23:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:13:01.775 23:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:13:02.342 23:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.342 23:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:02.342 23:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.342 23:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.342 23:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.342 23:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:02.342 23:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:02.342 23:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:02.600 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:13:02.600 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.600 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:02.600 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:02.600 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:02.600 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.600 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3 00:13:02.600 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:02.600 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.600 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.600 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:02.600 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:02.600 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:03.165 00:13:03.165 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:03.165 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.165 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:03.165 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.165 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.165 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.165 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.165 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.165 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:03.165 { 00:13:03.165 "cntlid": 15, 00:13:03.165 "qid": 0, 00:13:03.165 "state": "enabled", 00:13:03.165 "thread": "nvmf_tgt_poll_group_000", 00:13:03.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:03.165 "listen_address": { 00:13:03.165 "trtype": "TCP", 00:13:03.165 "adrfam": "IPv4", 00:13:03.165 "traddr": "10.0.0.3", 00:13:03.165 "trsvcid": "4420" 00:13:03.165 }, 00:13:03.165 "peer_address": { 00:13:03.165 "trtype": "TCP", 00:13:03.165 "adrfam": "IPv4", 00:13:03.165 "traddr": "10.0.0.1", 00:13:03.165 "trsvcid": "41386" 00:13:03.165 }, 00:13:03.165 "auth": { 00:13:03.165 "state": "completed", 00:13:03.165 "digest": "sha256", 00:13:03.165 "dhgroup": "ffdhe2048" 00:13:03.165 } 00:13:03.165 } 00:13:03.165 ]' 00:13:03.165 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:03.423 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:03.423 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:03.423 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:03.423 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:03.423 
23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.423 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.423 23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.681 23:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:13:03.681 23:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:13:04.247 23:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.247 23:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:04.247 23:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.247 23:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.247 23:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.247 23:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:04.247 23:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:04.247 23:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:04.247 23:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:04.506 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:13:04.506 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:04.506 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:04.506 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:04.506 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:04.506 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.506 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.506 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.506 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.506 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.506 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.506 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.506 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.073 00:13:05.073 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:05.073 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:05.073 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.332 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.332 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.332 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.332 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.332 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.332 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:05.332 { 00:13:05.332 "cntlid": 17, 00:13:05.332 "qid": 0, 00:13:05.332 "state": "enabled", 00:13:05.332 "thread": "nvmf_tgt_poll_group_000", 00:13:05.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:05.332 "listen_address": { 00:13:05.332 "trtype": "TCP", 00:13:05.332 "adrfam": "IPv4", 00:13:05.332 "traddr": "10.0.0.3", 00:13:05.332 "trsvcid": "4420" 00:13:05.332 }, 00:13:05.332 "peer_address": { 00:13:05.332 "trtype": "TCP", 00:13:05.332 "adrfam": "IPv4", 00:13:05.332 "traddr": "10.0.0.1", 00:13:05.332 "trsvcid": "41416" 00:13:05.332 }, 00:13:05.332 "auth": { 00:13:05.332 "state": "completed", 00:13:05.332 "digest": "sha256", 00:13:05.332 "dhgroup": "ffdhe3072" 00:13:05.332 } 00:13:05.332 } 00:13:05.332 ]' 00:13:05.332 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:05.332 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:05.332 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:05.332 23:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:05.332 23:57:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:05.332 23:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.332 23:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.332 23:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.590 23:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:13:05.591 23:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:13:06.544 23:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.544 23:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:06.544 23:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.544 23:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.544 23:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.544 23:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:06.544 23:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:06.544 23:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:06.895 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:13:06.895 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.895 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:06.895 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:06.895 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:06.895 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.895 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.895 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.895 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.895 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.895 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.895 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.895 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:07.155 00:13:07.155 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:07.155 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:07.155 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.414 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.414 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.414 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.414 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.414 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.414 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:07.414 { 00:13:07.414 "cntlid": 19, 00:13:07.414 "qid": 0, 00:13:07.414 "state": "enabled", 00:13:07.414 "thread": "nvmf_tgt_poll_group_000", 00:13:07.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:07.414 "listen_address": { 00:13:07.414 "trtype": "TCP", 00:13:07.414 "adrfam": "IPv4", 00:13:07.414 "traddr": "10.0.0.3", 00:13:07.414 "trsvcid": "4420" 00:13:07.414 }, 00:13:07.414 "peer_address": { 00:13:07.414 "trtype": "TCP", 00:13:07.414 "adrfam": "IPv4", 00:13:07.414 "traddr": "10.0.0.1", 00:13:07.414 "trsvcid": "41434" 00:13:07.414 }, 00:13:07.414 "auth": { 00:13:07.414 "state": "completed", 00:13:07.414 "digest": "sha256", 00:13:07.414 "dhgroup": "ffdhe3072" 00:13:07.414 } 00:13:07.414 } 00:13:07.414 ]' 00:13:07.414 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:07.414 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:07.414 23:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.414 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:07.414 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:07.673 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.673 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.673 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.932 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:13:07.932 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:13:08.499 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.499 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:08.499 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.499 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.499 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.499 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:08.499 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:08.499 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:08.757 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:13:08.757 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.757 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:08.757 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:08.757 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:08.757 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.757 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.757 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.757 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.757 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.757 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.757 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.757 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.016 00:13:09.016 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:09.016 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:09.016 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.274 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.274 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.274 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.274 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.274 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.274 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:09.274 { 00:13:09.274 "cntlid": 21, 00:13:09.275 "qid": 0, 00:13:09.275 "state": "enabled", 00:13:09.275 "thread": "nvmf_tgt_poll_group_000", 00:13:09.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:09.275 "listen_address": { 00:13:09.275 "trtype": "TCP", 00:13:09.275 "adrfam": "IPv4", 00:13:09.275 "traddr": "10.0.0.3", 00:13:09.275 "trsvcid": "4420" 00:13:09.275 }, 00:13:09.275 "peer_address": { 00:13:09.275 "trtype": "TCP", 00:13:09.275 "adrfam": "IPv4", 00:13:09.275 "traddr": "10.0.0.1", 00:13:09.275 "trsvcid": "41456" 00:13:09.275 }, 00:13:09.275 "auth": { 00:13:09.275 "state": "completed", 00:13:09.275 "digest": "sha256", 00:13:09.275 "dhgroup": "ffdhe3072" 00:13:09.275 } 00:13:09.275 } 00:13:09.275 ]' 00:13:09.275 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
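
(Annotation, for readers skimming the trace: every connect_authenticate pass above repeats the same RPC sequence. The condensed bash sketch below is illustrative only, reconstructed from the commands visible in this log rather than copied from the real target/auth.sh; the key names key0..key3 / ckey0..ckey3 are assumed to have been registered with the target- and host-side keyrings earlier in the run, outside this excerpt.)

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a
    subnqn=nqn.2024-03.io.spdk:cnode0

    # hostrpc targets the initiator-side SPDK app, matching the expansion
    # shown at target/auth.sh@31 throughout this trace.
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3

        # Grant the host access to the subsystem, bound to the key pair under test.
        "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

        # Attaching the controller is what actually drives the DH-HMAC-CHAP exchange.
        hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
            -q "$hostnqn" -n "$subnqn" -b nvme0 \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

        # The target's qpair must report the negotiated digest and dhgroup and an
        # auth state of "completed"; jq -e exits nonzero if the check is false.
        "$rpc" nvmf_subsystem_get_qpairs "$subnqn" |
            jq -e --arg d "$digest" --arg g "$dhgroup" \
                '.[0].auth | .digest == $d and .dhgroup == $g and .state == "completed"'

        hostrpc bdev_nvme_detach_controller nvme0
    }

(The nvme-cli round trip and host removal that close each pass are sketched after the ffdhe6144 transition further below.)
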
00:13:09.275 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:09.275 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.534 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:09.534 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.534 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.534 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.534 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.792 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:13:09.792 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:13:10.359 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.359 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:10.359 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.359 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.359 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.359 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.359 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:10.359 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:10.618 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:13:10.618 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.618 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:10.618 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:10.618 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:13:10.618 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.618 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3 00:13:10.618 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.618 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.618 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.618 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:10.618 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:10.618 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:10.877 00:13:10.877 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.877 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.877 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.135 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.135 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.135 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.135 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.135 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.135 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.135 { 00:13:11.135 "cntlid": 23, 00:13:11.135 "qid": 0, 00:13:11.135 "state": "enabled", 00:13:11.135 "thread": "nvmf_tgt_poll_group_000", 00:13:11.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:11.135 "listen_address": { 00:13:11.136 "trtype": "TCP", 00:13:11.136 "adrfam": "IPv4", 00:13:11.136 "traddr": "10.0.0.3", 00:13:11.136 "trsvcid": "4420" 00:13:11.136 }, 00:13:11.136 "peer_address": { 00:13:11.136 "trtype": "TCP", 00:13:11.136 "adrfam": "IPv4", 00:13:11.136 "traddr": "10.0.0.1", 00:13:11.136 "trsvcid": "41478" 00:13:11.136 }, 00:13:11.136 "auth": { 00:13:11.136 "state": "completed", 00:13:11.136 "digest": "sha256", 00:13:11.136 "dhgroup": "ffdhe3072" 00:13:11.136 } 00:13:11.136 } 00:13:11.136 ]' 00:13:11.136 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:13:11.395 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:11.395 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.395 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:11.395 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.395 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.395 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.395 23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.653 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:13:11.653 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:13:12.220 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.220 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:12.220 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.220 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.220 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.220 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:12.220 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.220 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:12.220 23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:12.479 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:13:12.479 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.479 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:12.479 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:12.479 23:57:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:12.479 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.479 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.479 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.479 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.479 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.479 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.479 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.479 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.047 00:13:13.047 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:13.047 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.047 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.306 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.306 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.306 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.306 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.306 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.306 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.306 { 00:13:13.306 "cntlid": 25, 00:13:13.306 "qid": 0, 00:13:13.306 "state": "enabled", 00:13:13.306 "thread": "nvmf_tgt_poll_group_000", 00:13:13.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:13.306 "listen_address": { 00:13:13.306 "trtype": "TCP", 00:13:13.306 "adrfam": "IPv4", 00:13:13.306 "traddr": "10.0.0.3", 00:13:13.306 "trsvcid": "4420" 00:13:13.306 }, 00:13:13.306 "peer_address": { 00:13:13.306 "trtype": "TCP", 00:13:13.306 "adrfam": "IPv4", 00:13:13.306 "traddr": "10.0.0.1", 00:13:13.306 "trsvcid": "54322" 00:13:13.306 }, 00:13:13.306 "auth": { 00:13:13.306 "state": "completed", 00:13:13.306 "digest": "sha256", 00:13:13.306 "dhgroup": "ffdhe4096" 
00:13:13.306 } 00:13:13.306 } 00:13:13.306 ]' 00:13:13.306 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.306 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:13.306 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.306 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:13.306 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.306 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.306 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.306 23:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.564 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:13:13.564 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:13:14.499 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.499 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:14.499 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.499 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.499 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.499 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.499 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:14.499 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:14.499 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:13:14.499 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:14.499 23:57:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:14.499 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:14.499 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:14.499 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.499 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.499 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.499 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.499 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.499 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.499 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.499 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:15.066 00:13:15.066 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.066 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.066 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.324 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.324 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.324 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.324 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.324 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.324 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:15.324 { 00:13:15.324 "cntlid": 27, 00:13:15.324 "qid": 0, 00:13:15.324 "state": "enabled", 00:13:15.324 "thread": "nvmf_tgt_poll_group_000", 00:13:15.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:15.324 "listen_address": { 00:13:15.324 "trtype": "TCP", 00:13:15.324 "adrfam": "IPv4", 00:13:15.324 "traddr": "10.0.0.3", 00:13:15.324 "trsvcid": "4420" 00:13:15.324 }, 00:13:15.324 "peer_address": { 00:13:15.324 "trtype": "TCP", 00:13:15.324 "adrfam": 
"IPv4", 00:13:15.324 "traddr": "10.0.0.1", 00:13:15.324 "trsvcid": "54346" 00:13:15.324 }, 00:13:15.324 "auth": { 00:13:15.324 "state": "completed", 00:13:15.324 "digest": "sha256", 00:13:15.324 "dhgroup": "ffdhe4096" 00:13:15.324 } 00:13:15.324 } 00:13:15.324 ]' 00:13:15.324 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:15.324 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:15.324 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.324 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:15.324 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.324 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.324 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.324 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.583 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:13:15.583 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:13:16.519 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.519 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:16.519 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.519 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.519 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.519 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.519 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:16.519 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:16.519 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:13:16.519 23:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:16.519 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:16.519 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:16.519 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:16.519 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.519 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.519 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.519 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.519 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.519 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.519 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.519 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.087 00:13:17.087 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.087 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.087 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.087 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.087 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.087 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.087 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.345 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.345 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.345 { 00:13:17.345 "cntlid": 29, 00:13:17.345 "qid": 0, 00:13:17.345 "state": "enabled", 00:13:17.345 "thread": "nvmf_tgt_poll_group_000", 00:13:17.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:17.345 "listen_address": { 00:13:17.345 "trtype": "TCP", 00:13:17.345 "adrfam": "IPv4", 00:13:17.345 "traddr": "10.0.0.3", 
00:13:17.345 "trsvcid": "4420" 00:13:17.345 }, 00:13:17.345 "peer_address": { 00:13:17.345 "trtype": "TCP", 00:13:17.345 "adrfam": "IPv4", 00:13:17.345 "traddr": "10.0.0.1", 00:13:17.345 "trsvcid": "54364" 00:13:17.345 }, 00:13:17.345 "auth": { 00:13:17.345 "state": "completed", 00:13:17.345 "digest": "sha256", 00:13:17.345 "dhgroup": "ffdhe4096" 00:13:17.345 } 00:13:17.345 } 00:13:17.345 ]' 00:13:17.345 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.345 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:17.345 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.345 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:17.345 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.345 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.346 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.346 23:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.604 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:13:17.604 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:13:18.172 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.172 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:18.172 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.172 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.172 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.172 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.172 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:18.172 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:18.431 23:57:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:13:18.431 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:18.431 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:18.431 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:18.431 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:18.431 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.431 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3 00:13:18.431 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.431 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.431 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.431 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:18.431 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:18.431 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:18.997 00:13:18.997 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:18.997 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:18.997 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.256 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.256 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.256 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.256 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.256 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.256 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.256 { 00:13:19.256 "cntlid": 31, 00:13:19.256 "qid": 0, 00:13:19.256 "state": "enabled", 00:13:19.256 "thread": "nvmf_tgt_poll_group_000", 00:13:19.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:19.256 "listen_address": { 00:13:19.256 "trtype": "TCP", 00:13:19.256 "adrfam": "IPv4", 
00:13:19.256 "traddr": "10.0.0.3", 00:13:19.256 "trsvcid": "4420" 00:13:19.256 }, 00:13:19.256 "peer_address": { 00:13:19.256 "trtype": "TCP", 00:13:19.256 "adrfam": "IPv4", 00:13:19.256 "traddr": "10.0.0.1", 00:13:19.256 "trsvcid": "54384" 00:13:19.256 }, 00:13:19.256 "auth": { 00:13:19.256 "state": "completed", 00:13:19.256 "digest": "sha256", 00:13:19.256 "dhgroup": "ffdhe4096" 00:13:19.256 } 00:13:19.256 } 00:13:19.256 ]' 00:13:19.256 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.256 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:19.256 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.256 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:19.256 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.256 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.256 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.256 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.515 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:13:19.515 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:13:20.082 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.082 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:20.082 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.082 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.082 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.082 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:20.082 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.082 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:20.082 23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:13:20.668 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:13:20.668 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.668 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:20.668 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:20.668 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:20.668 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.668 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.668 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.668 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.668 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.668 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.668 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.668 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.928 00:13:20.928 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:20.928 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:20.928 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.187 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.187 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.187 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.187 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.187 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.187 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.187 { 00:13:21.187 "cntlid": 33, 00:13:21.187 "qid": 0, 00:13:21.187 "state": "enabled", 00:13:21.187 "thread": "nvmf_tgt_poll_group_000", 00:13:21.187 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:21.187 "listen_address": { 00:13:21.187 "trtype": "TCP", 00:13:21.187 "adrfam": "IPv4", 00:13:21.187 "traddr": "10.0.0.3", 00:13:21.187 "trsvcid": "4420" 00:13:21.187 }, 00:13:21.187 "peer_address": { 00:13:21.187 "trtype": "TCP", 00:13:21.187 "adrfam": "IPv4", 00:13:21.187 "traddr": "10.0.0.1", 00:13:21.187 "trsvcid": "54404" 00:13:21.187 }, 00:13:21.187 "auth": { 00:13:21.187 "state": "completed", 00:13:21.187 "digest": "sha256", 00:13:21.187 "dhgroup": "ffdhe6144" 00:13:21.187 } 00:13:21.187 } 00:13:21.187 ]' 00:13:21.187 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.187 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:21.187 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.445 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:21.445 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.445 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.445 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.445 23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.704 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:13:21.704 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:13:22.271 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.272 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:22.272 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.272 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.272 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.272 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.272 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:13:22.272 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:22.530 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:13:22.530 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.530 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:22.530 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:22.530 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:22.530 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.530 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.530 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.530 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.530 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.530 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.530 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.530 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.097 00:13:23.097 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.097 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.097 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.356 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.356 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.356 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.356 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.356 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.356 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@74 -- # qpairs='[ 00:13:23.356 { 00:13:23.356 "cntlid": 35, 00:13:23.356 "qid": 0, 00:13:23.356 "state": "enabled", 00:13:23.356 "thread": "nvmf_tgt_poll_group_000", 00:13:23.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:23.356 "listen_address": { 00:13:23.356 "trtype": "TCP", 00:13:23.356 "adrfam": "IPv4", 00:13:23.356 "traddr": "10.0.0.3", 00:13:23.356 "trsvcid": "4420" 00:13:23.356 }, 00:13:23.356 "peer_address": { 00:13:23.356 "trtype": "TCP", 00:13:23.356 "adrfam": "IPv4", 00:13:23.356 "traddr": "10.0.0.1", 00:13:23.356 "trsvcid": "54788" 00:13:23.356 }, 00:13:23.356 "auth": { 00:13:23.356 "state": "completed", 00:13:23.356 "digest": "sha256", 00:13:23.356 "dhgroup": "ffdhe6144" 00:13:23.356 } 00:13:23.356 } 00:13:23.356 ]' 00:13:23.356 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.356 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:23.356 23:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.356 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:23.356 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.613 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.613 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.613 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.870 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:13:23.870 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:13:24.437 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.437 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:24.437 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.437 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.437 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.437 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.437 23:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:24.437 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:24.696 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:13:24.696 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.696 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:24.696 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:24.696 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:24.696 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.696 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.696 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.696 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.696 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.696 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.696 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.696 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.263 00:13:25.263 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.263 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.263 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.263 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.263 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.263 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.263 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.523 23:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.523 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.523 { 00:13:25.523 "cntlid": 37, 00:13:25.523 "qid": 0, 00:13:25.523 "state": "enabled", 00:13:25.523 "thread": "nvmf_tgt_poll_group_000", 00:13:25.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:25.523 "listen_address": { 00:13:25.523 "trtype": "TCP", 00:13:25.523 "adrfam": "IPv4", 00:13:25.523 "traddr": "10.0.0.3", 00:13:25.523 "trsvcid": "4420" 00:13:25.523 }, 00:13:25.523 "peer_address": { 00:13:25.523 "trtype": "TCP", 00:13:25.523 "adrfam": "IPv4", 00:13:25.523 "traddr": "10.0.0.1", 00:13:25.523 "trsvcid": "54820" 00:13:25.523 }, 00:13:25.523 "auth": { 00:13:25.523 "state": "completed", 00:13:25.523 "digest": "sha256", 00:13:25.523 "dhgroup": "ffdhe6144" 00:13:25.523 } 00:13:25.523 } 00:13:25.523 ]' 00:13:25.523 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.523 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:25.523 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.523 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:25.523 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.523 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.523 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.523 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.782 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:13:25.782 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:13:26.350 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.350 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:26.350 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.350 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.350 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
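The qpairs JSON dumped above is what the script asserts against: digest, DH group, and final authentication state. A condensed sketch of those checks, assuming the array is held in the qpairs variable as in connect_authenticate:

# Verify the negotiated parameters match what bdev_nvme_set_options pinned.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
# "completed" means the DH-HMAC-CHAP exchange finished on this qpair.
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]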
00:13:26.350 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.350 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:26.350 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:26.609 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:13:26.609 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.610 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:26.610 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:26.610 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:26.610 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.610 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3 00:13:26.610 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.610 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.610 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.610 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:26.610 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:26.610 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:27.177 00:13:27.177 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.177 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.177 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.435 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.435 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.435 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.435 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.435 
23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.435 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.435 { 00:13:27.435 "cntlid": 39, 00:13:27.435 "qid": 0, 00:13:27.435 "state": "enabled", 00:13:27.435 "thread": "nvmf_tgt_poll_group_000", 00:13:27.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:27.435 "listen_address": { 00:13:27.435 "trtype": "TCP", 00:13:27.435 "adrfam": "IPv4", 00:13:27.435 "traddr": "10.0.0.3", 00:13:27.435 "trsvcid": "4420" 00:13:27.435 }, 00:13:27.435 "peer_address": { 00:13:27.435 "trtype": "TCP", 00:13:27.435 "adrfam": "IPv4", 00:13:27.435 "traddr": "10.0.0.1", 00:13:27.435 "trsvcid": "54842" 00:13:27.435 }, 00:13:27.435 "auth": { 00:13:27.435 "state": "completed", 00:13:27.435 "digest": "sha256", 00:13:27.435 "dhgroup": "ffdhe6144" 00:13:27.435 } 00:13:27.435 } 00:13:27.435 ]' 00:13:27.435 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.435 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:27.435 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.435 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:27.435 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:27.693 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.693 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.693 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.952 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:13:27.952 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:13:28.518 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.518 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:28.518 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.518 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.518 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.518 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:13:28.518 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.518 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:28.518 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:28.777 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:13:28.778 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.778 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:28.778 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:28.778 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:28.778 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.778 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.778 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.778 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.778 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.778 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.778 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.778 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.345 00:13:29.345 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.345 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.345 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.345 23:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.345 23:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.345 23:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:29.345 23:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.604 23:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.604 23:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.604 { 00:13:29.604 "cntlid": 41, 00:13:29.604 "qid": 0, 00:13:29.604 "state": "enabled", 00:13:29.604 "thread": "nvmf_tgt_poll_group_000", 00:13:29.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:29.604 "listen_address": { 00:13:29.604 "trtype": "TCP", 00:13:29.604 "adrfam": "IPv4", 00:13:29.604 "traddr": "10.0.0.3", 00:13:29.604 "trsvcid": "4420" 00:13:29.604 }, 00:13:29.604 "peer_address": { 00:13:29.604 "trtype": "TCP", 00:13:29.604 "adrfam": "IPv4", 00:13:29.604 "traddr": "10.0.0.1", 00:13:29.604 "trsvcid": "54874" 00:13:29.604 }, 00:13:29.604 "auth": { 00:13:29.604 "state": "completed", 00:13:29.604 "digest": "sha256", 00:13:29.604 "dhgroup": "ffdhe8192" 00:13:29.604 } 00:13:29.604 } 00:13:29.604 ]' 00:13:29.604 23:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.604 23:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:29.604 23:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.604 23:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:29.604 23:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.604 23:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.604 23:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.604 23:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.863 23:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:13:29.863 23:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:13:30.800 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.800 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:30.800 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
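The nvme_connect/nvme disconnect pairs above exercise the same handshake from the kernel initiator, passing the DHHC-1 secrets directly on the command line. A minimal sketch with the secrets abbreviated, assuming an nvme-cli build with TCP and DH-CHAP support and the same hostnqn/hostid pair used throughout:

# Kernel initiator: --dhchap-secret carries the host key and
# --dhchap-ctrl-secret the controller key for bidirectional auth.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0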
00:13:30.800 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.800 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.800 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.800 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:30.800 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:31.059 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:13:31.059 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.059 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:31.059 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:31.059 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:31.059 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.059 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.059 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.059 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.059 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.059 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.059 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.059 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.625 00:13:31.626 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.626 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.626 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.884 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.884 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.884 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.884 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.884 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.884 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.884 { 00:13:31.884 "cntlid": 43, 00:13:31.884 "qid": 0, 00:13:31.884 "state": "enabled", 00:13:31.884 "thread": "nvmf_tgt_poll_group_000", 00:13:31.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:31.884 "listen_address": { 00:13:31.884 "trtype": "TCP", 00:13:31.884 "adrfam": "IPv4", 00:13:31.884 "traddr": "10.0.0.3", 00:13:31.884 "trsvcid": "4420" 00:13:31.884 }, 00:13:31.884 "peer_address": { 00:13:31.884 "trtype": "TCP", 00:13:31.884 "adrfam": "IPv4", 00:13:31.884 "traddr": "10.0.0.1", 00:13:31.884 "trsvcid": "54902" 00:13:31.884 }, 00:13:31.884 "auth": { 00:13:31.884 "state": "completed", 00:13:31.884 "digest": "sha256", 00:13:31.884 "dhgroup": "ffdhe8192" 00:13:31.884 } 00:13:31.884 } 00:13:31.884 ]' 00:13:31.884 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.884 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:31.884 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.884 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:31.884 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.884 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.884 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.884 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.142 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:13:32.143 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:13:33.076 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.076 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:33.076 
23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.076 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.076 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.076 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.076 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:33.076 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:33.335 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:13:33.335 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.335 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:33.335 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:33.335 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:33.335 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.335 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.335 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.335 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.335 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.335 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.335 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.335 23:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.902 00:13:33.902 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.902 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.902 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.169 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.169 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.169 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.169 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.169 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.169 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:34.169 { 00:13:34.169 "cntlid": 45, 00:13:34.169 "qid": 0, 00:13:34.169 "state": "enabled", 00:13:34.169 "thread": "nvmf_tgt_poll_group_000", 00:13:34.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:34.169 "listen_address": { 00:13:34.169 "trtype": "TCP", 00:13:34.169 "adrfam": "IPv4", 00:13:34.169 "traddr": "10.0.0.3", 00:13:34.169 "trsvcid": "4420" 00:13:34.169 }, 00:13:34.169 "peer_address": { 00:13:34.169 "trtype": "TCP", 00:13:34.169 "adrfam": "IPv4", 00:13:34.169 "traddr": "10.0.0.1", 00:13:34.169 "trsvcid": "39386" 00:13:34.169 }, 00:13:34.169 "auth": { 00:13:34.169 "state": "completed", 00:13:34.169 "digest": "sha256", 00:13:34.169 "dhgroup": "ffdhe8192" 00:13:34.169 } 00:13:34.169 } 00:13:34.169 ]' 00:13:34.169 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.169 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:34.169 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.169 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:34.169 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.440 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.440 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.440 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.700 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:13:34.700 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:13:35.268 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.268 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:35.268 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.269 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.269 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.269 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:35.269 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:35.269 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:35.528 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:13:35.528 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.528 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:35.528 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:35.528 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:35.528 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.528 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3 00:13:35.528 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.528 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.528 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.528 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:35.528 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:35.528 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:36.095 00:13:36.095 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:36.095 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:36.095 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.663 23:57:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.663 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.663 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.663 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.663 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.663 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:36.663 { 00:13:36.663 "cntlid": 47, 00:13:36.663 "qid": 0, 00:13:36.663 "state": "enabled", 00:13:36.663 "thread": "nvmf_tgt_poll_group_000", 00:13:36.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:36.663 "listen_address": { 00:13:36.663 "trtype": "TCP", 00:13:36.663 "adrfam": "IPv4", 00:13:36.663 "traddr": "10.0.0.3", 00:13:36.663 "trsvcid": "4420" 00:13:36.663 }, 00:13:36.663 "peer_address": { 00:13:36.663 "trtype": "TCP", 00:13:36.663 "adrfam": "IPv4", 00:13:36.663 "traddr": "10.0.0.1", 00:13:36.663 "trsvcid": "39406" 00:13:36.663 }, 00:13:36.663 "auth": { 00:13:36.663 "state": "completed", 00:13:36.663 "digest": "sha256", 00:13:36.663 "dhgroup": "ffdhe8192" 00:13:36.663 } 00:13:36.663 } 00:13:36.663 ]' 00:13:36.663 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:36.663 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:36.663 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:36.663 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:36.663 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:36.663 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.663 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.663 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.922 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:13:36.922 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:13:37.490 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.490 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:37.490 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.490 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.490 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.490 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:37.490 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:37.490 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:37.490 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:37.490 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:38.058 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:13:38.058 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:38.058 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:38.058 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:38.058 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:38.058 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.058 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.058 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.058 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.058 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.058 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.058 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.059 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.317 00:13:38.317 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:38.317 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # jq -r '.[].name' 00:13:38.317 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.576 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.576 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.576 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.576 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.576 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.576 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:38.576 { 00:13:38.576 "cntlid": 49, 00:13:38.576 "qid": 0, 00:13:38.576 "state": "enabled", 00:13:38.576 "thread": "nvmf_tgt_poll_group_000", 00:13:38.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:38.576 "listen_address": { 00:13:38.576 "trtype": "TCP", 00:13:38.576 "adrfam": "IPv4", 00:13:38.576 "traddr": "10.0.0.3", 00:13:38.576 "trsvcid": "4420" 00:13:38.576 }, 00:13:38.576 "peer_address": { 00:13:38.576 "trtype": "TCP", 00:13:38.576 "adrfam": "IPv4", 00:13:38.576 "traddr": "10.0.0.1", 00:13:38.576 "trsvcid": "39424" 00:13:38.576 }, 00:13:38.576 "auth": { 00:13:38.576 "state": "completed", 00:13:38.576 "digest": "sha384", 00:13:38.576 "dhgroup": "null" 00:13:38.576 } 00:13:38.576 } 00:13:38.576 ]' 00:13:38.576 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:38.576 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:38.576 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:38.576 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:38.576 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:38.835 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.835 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.835 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.094 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:13:39.094 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret 
DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:13:39.662 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.662 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:39.662 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.662 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.662 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.662 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:39.662 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:39.662 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:39.921 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:39.921 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.921 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:39.921 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:39.921 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:39.921 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.921 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.921 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.921 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.921 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.921 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.921 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.921 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:13:40.179 00:13:40.179 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:40.179 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:40.179 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.747 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.747 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.747 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.747 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.747 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.747 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:40.747 { 00:13:40.747 "cntlid": 51, 00:13:40.747 "qid": 0, 00:13:40.747 "state": "enabled", 00:13:40.747 "thread": "nvmf_tgt_poll_group_000", 00:13:40.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:40.747 "listen_address": { 00:13:40.747 "trtype": "TCP", 00:13:40.747 "adrfam": "IPv4", 00:13:40.747 "traddr": "10.0.0.3", 00:13:40.747 "trsvcid": "4420" 00:13:40.747 }, 00:13:40.747 "peer_address": { 00:13:40.747 "trtype": "TCP", 00:13:40.747 "adrfam": "IPv4", 00:13:40.747 "traddr": "10.0.0.1", 00:13:40.747 "trsvcid": "39442" 00:13:40.747 }, 00:13:40.747 "auth": { 00:13:40.747 "state": "completed", 00:13:40.747 "digest": "sha384", 00:13:40.747 "dhgroup": "null" 00:13:40.747 } 00:13:40.747 } 00:13:40.747 ]' 00:13:40.747 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:40.747 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:40.747 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:40.747 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:40.747 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:40.747 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.747 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.747 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.006 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:13:41.006 23:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 
--dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:13:41.572 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.573 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:41.573 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.573 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.573 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.573 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:41.573 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:41.573 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:41.831 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:41.831 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.831 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:41.831 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:41.831 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:41.831 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.831 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.831 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.831 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.831 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.831 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.831 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.831 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.090 00:13:42.090 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:42.090 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.090 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:42.656 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.656 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.656 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.656 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.656 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.656 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:42.656 { 00:13:42.656 "cntlid": 53, 00:13:42.656 "qid": 0, 00:13:42.656 "state": "enabled", 00:13:42.656 "thread": "nvmf_tgt_poll_group_000", 00:13:42.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:42.656 "listen_address": { 00:13:42.656 "trtype": "TCP", 00:13:42.656 "adrfam": "IPv4", 00:13:42.656 "traddr": "10.0.0.3", 00:13:42.656 "trsvcid": "4420" 00:13:42.656 }, 00:13:42.656 "peer_address": { 00:13:42.656 "trtype": "TCP", 00:13:42.656 "adrfam": "IPv4", 00:13:42.656 "traddr": "10.0.0.1", 00:13:42.656 "trsvcid": "39462" 00:13:42.656 }, 00:13:42.656 "auth": { 00:13:42.656 "state": "completed", 00:13:42.656 "digest": "sha384", 00:13:42.656 "dhgroup": "null" 00:13:42.656 } 00:13:42.656 } 00:13:42.656 ]' 00:13:42.656 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:42.656 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:42.656 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:42.656 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:42.656 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.656 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.656 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.656 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.914 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:13:42.914 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:13:43.851 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.851 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:43.851 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.851 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.851 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.851 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.851 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:43.851 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:44.110 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:13:44.110 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:44.110 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:44.110 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:44.110 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:44.110 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.110 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3 00:13:44.110 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.110 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.110 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.110 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:44.110 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:44.110 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:44.369 00:13:44.369 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:44.369 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.369 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:44.628 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.628 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.628 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.628 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.629 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.629 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.629 { 00:13:44.629 "cntlid": 55, 00:13:44.629 "qid": 0, 00:13:44.629 "state": "enabled", 00:13:44.629 "thread": "nvmf_tgt_poll_group_000", 00:13:44.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:44.629 "listen_address": { 00:13:44.629 "trtype": "TCP", 00:13:44.629 "adrfam": "IPv4", 00:13:44.629 "traddr": "10.0.0.3", 00:13:44.629 "trsvcid": "4420" 00:13:44.629 }, 00:13:44.629 "peer_address": { 00:13:44.629 "trtype": "TCP", 00:13:44.629 "adrfam": "IPv4", 00:13:44.629 "traddr": "10.0.0.1", 00:13:44.629 "trsvcid": "57496" 00:13:44.629 }, 00:13:44.629 "auth": { 00:13:44.629 "state": "completed", 00:13:44.629 "digest": "sha384", 00:13:44.629 "dhgroup": "null" 00:13:44.629 } 00:13:44.629 } 00:13:44.629 ]' 00:13:44.629 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.629 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:44.629 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.629 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:44.629 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.629 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.629 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.629 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.197 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:13:45.197 23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 
2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:13:45.765 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.765 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:45.765 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.765 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.765 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.765 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:45.765 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:45.765 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:45.765 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:46.024 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:13:46.024 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:46.024 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:46.024 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:46.024 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:46.024 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.024 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.024 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.024 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.024 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.024 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.024 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.025 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:46.284 00:13:46.284 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.284 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.284 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.851 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.851 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.851 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.851 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.851 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.851 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.851 { 00:13:46.851 "cntlid": 57, 00:13:46.851 "qid": 0, 00:13:46.851 "state": "enabled", 00:13:46.851 "thread": "nvmf_tgt_poll_group_000", 00:13:46.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:46.851 "listen_address": { 00:13:46.851 "trtype": "TCP", 00:13:46.851 "adrfam": "IPv4", 00:13:46.851 "traddr": "10.0.0.3", 00:13:46.851 "trsvcid": "4420" 00:13:46.851 }, 00:13:46.851 "peer_address": { 00:13:46.851 "trtype": "TCP", 00:13:46.851 "adrfam": "IPv4", 00:13:46.851 "traddr": "10.0.0.1", 00:13:46.851 "trsvcid": "57534" 00:13:46.851 }, 00:13:46.851 "auth": { 00:13:46.851 "state": "completed", 00:13:46.851 "digest": "sha384", 00:13:46.851 "dhgroup": "ffdhe2048" 00:13:46.851 } 00:13:46.851 } 00:13:46.851 ]' 00:13:46.851 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.851 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:46.851 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.851 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:46.851 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:46.851 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.851 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.851 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.110 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:13:47.110 23:57:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:13:47.690 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.690 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:47.690 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.690 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.690 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.690 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:47.690 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:47.690 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:47.958 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:13:47.958 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:47.958 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:47.958 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:47.958 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:47.958 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.958 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.958 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.958 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.958 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.958 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.958 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.959 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.527 00:13:48.527 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:48.527 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.527 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:48.527 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.527 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.527 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.527 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.527 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.527 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:48.527 { 00:13:48.527 "cntlid": 59, 00:13:48.527 "qid": 0, 00:13:48.527 "state": "enabled", 00:13:48.527 "thread": "nvmf_tgt_poll_group_000", 00:13:48.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:48.527 "listen_address": { 00:13:48.527 "trtype": "TCP", 00:13:48.527 "adrfam": "IPv4", 00:13:48.527 "traddr": "10.0.0.3", 00:13:48.527 "trsvcid": "4420" 00:13:48.527 }, 00:13:48.527 "peer_address": { 00:13:48.527 "trtype": "TCP", 00:13:48.527 "adrfam": "IPv4", 00:13:48.527 "traddr": "10.0.0.1", 00:13:48.527 "trsvcid": "57560" 00:13:48.527 }, 00:13:48.527 "auth": { 00:13:48.527 "state": "completed", 00:13:48.527 "digest": "sha384", 00:13:48.527 "dhgroup": "ffdhe2048" 00:13:48.527 } 00:13:48.527 } 00:13:48.527 ]' 00:13:48.527 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:48.786 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:48.786 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:48.786 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:48.786 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:48.786 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.786 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.786 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.045 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:13:49.045 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:13:49.613 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.613 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:49.613 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.613 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.613 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.613 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:49.613 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:49.613 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:49.873 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:13:49.873 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:49.873 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:49.873 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:49.873 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:49.873 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.873 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.873 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.873 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.130 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.130 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.130 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.130 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.388 00:13:50.388 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:50.388 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:50.388 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.647 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.647 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.647 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.647 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.647 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.647 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:50.647 { 00:13:50.647 "cntlid": 61, 00:13:50.647 "qid": 0, 00:13:50.647 "state": "enabled", 00:13:50.647 "thread": "nvmf_tgt_poll_group_000", 00:13:50.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:50.647 "listen_address": { 00:13:50.647 "trtype": "TCP", 00:13:50.647 "adrfam": "IPv4", 00:13:50.647 "traddr": "10.0.0.3", 00:13:50.647 "trsvcid": "4420" 00:13:50.647 }, 00:13:50.647 "peer_address": { 00:13:50.647 "trtype": "TCP", 00:13:50.647 "adrfam": "IPv4", 00:13:50.647 "traddr": "10.0.0.1", 00:13:50.647 "trsvcid": "57590" 00:13:50.647 }, 00:13:50.647 "auth": { 00:13:50.647 "state": "completed", 00:13:50.647 "digest": "sha384", 00:13:50.647 "dhgroup": "ffdhe2048" 00:13:50.647 } 00:13:50.647 } 00:13:50.647 ]' 00:13:50.647 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:50.647 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:50.647 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:50.647 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:50.647 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:50.906 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.906 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.906 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.165 
23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:13:51.165 23:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:13:51.732 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.732 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:51.732 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.732 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.732 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.732 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:51.732 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:51.732 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:51.990 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:13:51.990 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:51.990 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:51.990 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:51.990 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:51.990 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.990 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3 00:13:51.990 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.990 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.990 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.990 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:51.990 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:51.990 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.557 00:13:52.557 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:52.557 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:52.557 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.815 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.815 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.815 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.815 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.815 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.815 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:52.815 { 00:13:52.815 "cntlid": 63, 00:13:52.815 "qid": 0, 00:13:52.815 "state": "enabled", 00:13:52.815 "thread": "nvmf_tgt_poll_group_000", 00:13:52.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:52.815 "listen_address": { 00:13:52.815 "trtype": "TCP", 00:13:52.815 "adrfam": "IPv4", 00:13:52.815 "traddr": "10.0.0.3", 00:13:52.815 "trsvcid": "4420" 00:13:52.815 }, 00:13:52.815 "peer_address": { 00:13:52.815 "trtype": "TCP", 00:13:52.815 "adrfam": "IPv4", 00:13:52.815 "traddr": "10.0.0.1", 00:13:52.815 "trsvcid": "57626" 00:13:52.815 }, 00:13:52.815 "auth": { 00:13:52.815 "state": "completed", 00:13:52.815 "digest": "sha384", 00:13:52.815 "dhgroup": "ffdhe2048" 00:13:52.815 } 00:13:52.815 } 00:13:52.815 ]' 00:13:52.815 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:52.815 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:52.815 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:52.815 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:52.815 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.073 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.073 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.073 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
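(The trace above keeps repeating one round trip per digest/dhgroup/key combination. A minimal sketch of that loop body, using only the RPCs visible in this log — the hostrpc helper wraps rpc.py -s /var/tmp/host.sock, rpc_cmd is assumed to reach the target app on its default socket, and the shell variable names are hypothetical:)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py            # hypothetical shorthand
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a

# 1. Pin the SPDK host initiator to the digest/dhgroup pair under test.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
# 2. Register the host on the target with the key pair for this iteration.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 3. Attach a controller via the host RPC server, forcing the DH-HMAC-CHAP handshake.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 4. Verify the controller came up and the qpair finished authenticating.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'         # expect completed
# 5. Detach; the kernel-initiator leg (nvme connect/disconnect) then runs with the
#    same secrets before nvmf_subsystem_remove_host closes out the iteration.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0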
00:13:53.332 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:13:53.332 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:13:53.899 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.899 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:53.899 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.899 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.899 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.899 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:53.899 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:53.899 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:53.899 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:54.159 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:13:54.159 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.159 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:54.159 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:54.159 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:54.159 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.159 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.159 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.159 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.159 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.159 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
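(Each iteration also exercises the same key material through the kernel initiator between the detach and remove_host steps; a sketch of that leg under the same assumptions, with the DHHC-1 secrets abbreviated to hypothetical variables — the full blobs appear verbatim in the nvme_connect trace lines:)

# key0/ckey0 stand for the DHHC-1:00:... and DHHC-1:03:... blobs printed above.
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 \
    --dhchap-secret "$key0" --dhchap-ctrl-secret "$ckey0"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # "disconnected 1 controller(s)" on success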
00:13:54.159 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.159 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.726 00:13:54.726 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:54.726 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:54.726 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.984 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.984 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.984 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.984 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.984 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.984 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:54.984 { 00:13:54.984 "cntlid": 65, 00:13:54.984 "qid": 0, 00:13:54.984 "state": "enabled", 00:13:54.984 "thread": "nvmf_tgt_poll_group_000", 00:13:54.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:54.984 "listen_address": { 00:13:54.984 "trtype": "TCP", 00:13:54.984 "adrfam": "IPv4", 00:13:54.984 "traddr": "10.0.0.3", 00:13:54.984 "trsvcid": "4420" 00:13:54.984 }, 00:13:54.984 "peer_address": { 00:13:54.984 "trtype": "TCP", 00:13:54.984 "adrfam": "IPv4", 00:13:54.984 "traddr": "10.0.0.1", 00:13:54.984 "trsvcid": "57714" 00:13:54.984 }, 00:13:54.984 "auth": { 00:13:54.984 "state": "completed", 00:13:54.984 "digest": "sha384", 00:13:54.984 "dhgroup": "ffdhe3072" 00:13:54.984 } 00:13:54.984 } 00:13:54.984 ]' 00:13:54.984 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:54.984 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:54.984 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:54.984 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:54.984 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:54.984 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.984 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.984 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.243 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:13:55.243 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:13:55.810 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.069 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:56.069 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.070 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.070 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.070 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.070 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:56.070 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:56.329 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:13:56.329 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.329 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:56.329 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:56.329 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:56.329 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.329 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.329 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.329 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.329 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.329 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.329 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.329 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.588 00:13:56.588 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.588 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:56.588 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.847 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.847 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.847 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.847 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.847 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.847 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:56.847 { 00:13:56.847 "cntlid": 67, 00:13:56.847 "qid": 0, 00:13:56.847 "state": "enabled", 00:13:56.847 "thread": "nvmf_tgt_poll_group_000", 00:13:56.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:56.847 "listen_address": { 00:13:56.847 "trtype": "TCP", 00:13:56.847 "adrfam": "IPv4", 00:13:56.847 "traddr": "10.0.0.3", 00:13:56.847 "trsvcid": "4420" 00:13:56.847 }, 00:13:56.847 "peer_address": { 00:13:56.847 "trtype": "TCP", 00:13:56.847 "adrfam": "IPv4", 00:13:56.847 "traddr": "10.0.0.1", 00:13:56.847 "trsvcid": "57738" 00:13:56.847 }, 00:13:56.847 "auth": { 00:13:56.847 "state": "completed", 00:13:56.847 "digest": "sha384", 00:13:56.847 "dhgroup": "ffdhe3072" 00:13:56.847 } 00:13:56.847 } 00:13:56.847 ]' 00:13:56.847 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:56.847 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:56.847 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.106 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:57.106 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.106 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
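The xtrace records above cover one pass of the test's per-key loop for sha384/ffdhe3072: bdev_nvme_set_options pins the SPDK host to a single digest/dhgroup pair, nvmf_subsystem_add_host registers the DH-HMAC-CHAP key (plus the optional controller key for bidirectional authentication) for the host NQN, bdev_nvme_attach_controller dials the target, and nvmf_subsystem_get_qpairs is filtered through jq to confirm the qpair finished authentication with the expected parameters. A minimal sketch of that sequence, distilled from the trace (socket paths are the ones used here; <hostnqn> is a placeholder for the uuid-based host NQN):

    # host side: restrict the initiator to one digest/dhgroup combination
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # target side: allow the host and bind its DH-HMAC-CHAP key(s)
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # connect, then verify the negotiated auth state on the resulting qpair
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect: completed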
00:13:57.106 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.106 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.365 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:13:57.365 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:13:57.932 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.932 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:57.932 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.932 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.932 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.932 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:57.932 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:57.932 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:58.191 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:13:58.191 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:58.191 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:58.191 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:58.191 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:58.191 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.191 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.191 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.191 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:58.191 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.191 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.191 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.191 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.762 00:13:58.762 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:58.762 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:58.762 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.762 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.762 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.762 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.762 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.762 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.762 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:58.762 { 00:13:58.762 "cntlid": 69, 00:13:58.762 "qid": 0, 00:13:58.762 "state": "enabled", 00:13:58.762 "thread": "nvmf_tgt_poll_group_000", 00:13:58.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:13:58.762 "listen_address": { 00:13:58.762 "trtype": "TCP", 00:13:58.762 "adrfam": "IPv4", 00:13:58.762 "traddr": "10.0.0.3", 00:13:58.762 "trsvcid": "4420" 00:13:58.762 }, 00:13:58.762 "peer_address": { 00:13:58.762 "trtype": "TCP", 00:13:58.762 "adrfam": "IPv4", 00:13:58.762 "traddr": "10.0.0.1", 00:13:58.762 "trsvcid": "57770" 00:13:58.762 }, 00:13:58.762 "auth": { 00:13:58.762 "state": "completed", 00:13:58.762 "digest": "sha384", 00:13:58.762 "dhgroup": "ffdhe3072" 00:13:58.762 } 00:13:58.762 } 00:13:58.762 ]' 00:13:58.763 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.021 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:59.021 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.021 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:59.021 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.022 23:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.022 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.022 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.282 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:13:59.282 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:13:59.850 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.850 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:13:59.850 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.850 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.850 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.850 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:59.850 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:59.850 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:00.109 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:14:00.109 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:00.109 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:00.109 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:00.109 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:00.109 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.109 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3 00:14:00.109 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:00.109 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.109 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.109 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:00.109 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:00.109 23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:00.677 00:14:00.677 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:00.677 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:00.677 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.937 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.937 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.937 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.937 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.937 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.937 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:00.937 { 00:14:00.937 "cntlid": 71, 00:14:00.937 "qid": 0, 00:14:00.937 "state": "enabled", 00:14:00.937 "thread": "nvmf_tgt_poll_group_000", 00:14:00.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:00.937 "listen_address": { 00:14:00.937 "trtype": "TCP", 00:14:00.937 "adrfam": "IPv4", 00:14:00.937 "traddr": "10.0.0.3", 00:14:00.937 "trsvcid": "4420" 00:14:00.937 }, 00:14:00.937 "peer_address": { 00:14:00.937 "trtype": "TCP", 00:14:00.937 "adrfam": "IPv4", 00:14:00.937 "traddr": "10.0.0.1", 00:14:00.937 "trsvcid": "57792" 00:14:00.937 }, 00:14:00.937 "auth": { 00:14:00.937 "state": "completed", 00:14:00.937 "digest": "sha384", 00:14:00.937 "dhgroup": "ffdhe3072" 00:14:00.937 } 00:14:00.937 } 00:14:00.937 ]' 00:14:00.937 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:00.937 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:00.937 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.225 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:01.225 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.225 
23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.225 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.225 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.513 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:14:01.513 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:14:02.081 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.081 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:02.081 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.081 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.081 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.081 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:02.081 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.081 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:02.081 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:02.340 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:14:02.340 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:02.340 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:02.340 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:02.340 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:02.340 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.340 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.340 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.340 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.340 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.340 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.340 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.340 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.599 00:14:02.599 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:02.599 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:02.599 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.857 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.857 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.857 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.857 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.857 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.857 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:02.857 { 00:14:02.857 "cntlid": 73, 00:14:02.857 "qid": 0, 00:14:02.857 "state": "enabled", 00:14:02.857 "thread": "nvmf_tgt_poll_group_000", 00:14:02.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:02.857 "listen_address": { 00:14:02.857 "trtype": "TCP", 00:14:02.857 "adrfam": "IPv4", 00:14:02.857 "traddr": "10.0.0.3", 00:14:02.857 "trsvcid": "4420" 00:14:02.857 }, 00:14:02.857 "peer_address": { 00:14:02.857 "trtype": "TCP", 00:14:02.857 "adrfam": "IPv4", 00:14:02.857 "traddr": "10.0.0.1", 00:14:02.857 "trsvcid": "57832" 00:14:02.857 }, 00:14:02.857 "auth": { 00:14:02.857 "state": "completed", 00:14:02.857 "digest": "sha384", 00:14:02.857 "dhgroup": "ffdhe4096" 00:14:02.857 } 00:14:02.857 } 00:14:02.857 ]' 00:14:02.857 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:02.857 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:02.857 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.116 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:03.116 23:58:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.116 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.116 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.116 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.375 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:14:03.375 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:14:03.943 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.943 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:03.943 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.943 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.943 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.943 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:03.943 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:03.943 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:04.202 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:14:04.202 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:04.202 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:04.202 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:04.202 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:04.202 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.202 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.202 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.202 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.461 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.461 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.461 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.461 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.720 00:14:04.720 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:04.720 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:04.720 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.978 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.978 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.979 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.979 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.979 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.979 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:04.979 { 00:14:04.979 "cntlid": 75, 00:14:04.979 "qid": 0, 00:14:04.979 "state": "enabled", 00:14:04.979 "thread": "nvmf_tgt_poll_group_000", 00:14:04.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:04.979 "listen_address": { 00:14:04.979 "trtype": "TCP", 00:14:04.979 "adrfam": "IPv4", 00:14:04.979 "traddr": "10.0.0.3", 00:14:04.979 "trsvcid": "4420" 00:14:04.979 }, 00:14:04.979 "peer_address": { 00:14:04.979 "trtype": "TCP", 00:14:04.979 "adrfam": "IPv4", 00:14:04.979 "traddr": "10.0.0.1", 00:14:04.979 "trsvcid": "42778" 00:14:04.979 }, 00:14:04.979 "auth": { 00:14:04.979 "state": "completed", 00:14:04.979 "digest": "sha384", 00:14:04.979 "dhgroup": "ffdhe4096" 00:14:04.979 } 00:14:04.979 } 00:14:04.979 ]' 00:14:04.979 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:04.979 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:04.979 23:58:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:04.979 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:04.979 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.238 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.238 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.238 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.238 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:14:05.238 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:14:06.174 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.174 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:06.174 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.174 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.174 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.174 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.174 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:06.174 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:06.432 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:14:06.432 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:06.432 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:06.432 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:06.432 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:06.432 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.432 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.432 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.432 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.432 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.432 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.432 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.432 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.691 00:14:06.691 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.691 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.691 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.951 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.951 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.951 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.951 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.951 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.951 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.951 { 00:14:06.951 "cntlid": 77, 00:14:06.951 "qid": 0, 00:14:06.951 "state": "enabled", 00:14:06.951 "thread": "nvmf_tgt_poll_group_000", 00:14:06.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:06.951 "listen_address": { 00:14:06.951 "trtype": "TCP", 00:14:06.951 "adrfam": "IPv4", 00:14:06.951 "traddr": "10.0.0.3", 00:14:06.951 "trsvcid": "4420" 00:14:06.951 }, 00:14:06.951 "peer_address": { 00:14:06.951 "trtype": "TCP", 00:14:06.951 "adrfam": "IPv4", 00:14:06.951 "traddr": "10.0.0.1", 00:14:06.951 "trsvcid": "42806" 00:14:06.951 }, 00:14:06.951 "auth": { 00:14:06.951 "state": "completed", 00:14:06.951 "digest": "sha384", 00:14:06.951 "dhgroup": "ffdhe4096" 00:14:06.951 } 00:14:06.951 } 00:14:06.951 ]' 00:14:06.951 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
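Once the digest/dhgroup checks pass, each iteration detaches the SPDK-side controller and replays the same key through the kernel initiator: nvme connect is given the secrets inline in DHHC-1 form (the NN in the DHHC-1:NN: prefix records how the secret was transformed, 00 meaning cleartext), the link is torn down with nvme disconnect, and nvmf_subsystem_remove_host clears the allow-list entry before the next key/dhgroup combination. Sketched from the trace, with <hostnqn>, <hostid> and the base64 secret bodies elided as placeholders:

    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # kernel initiator path: nvme-cli carries the DH-HMAC-CHAP secrets on the command line
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q <hostnqn> --hostid <hostid> -l 0 \
        --dhchap-secret 'DHHC-1:02:<secret>:' --dhchap-ctrl-secret 'DHHC-1:01:<ctrl-secret>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>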
00:14:06.951 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:06.951 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:07.209 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:07.209 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:07.209 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.209 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.209 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.468 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:14:07.468 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:14:08.036 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.036 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:08.036 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.036 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.036 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.036 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:08.036 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:08.036 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:08.295 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:14:08.295 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:08.295 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:08.295 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:08.295 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:14:08.295 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.295 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3 00:14:08.295 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.295 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.554 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.554 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:08.554 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:08.554 23:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:08.812 00:14:08.812 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.812 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.812 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.070 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.070 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.070 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.070 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.070 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.070 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:09.070 { 00:14:09.070 "cntlid": 79, 00:14:09.070 "qid": 0, 00:14:09.070 "state": "enabled", 00:14:09.070 "thread": "nvmf_tgt_poll_group_000", 00:14:09.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:09.070 "listen_address": { 00:14:09.070 "trtype": "TCP", 00:14:09.070 "adrfam": "IPv4", 00:14:09.070 "traddr": "10.0.0.3", 00:14:09.070 "trsvcid": "4420" 00:14:09.070 }, 00:14:09.070 "peer_address": { 00:14:09.070 "trtype": "TCP", 00:14:09.070 "adrfam": "IPv4", 00:14:09.070 "traddr": "10.0.0.1", 00:14:09.070 "trsvcid": "42846" 00:14:09.070 }, 00:14:09.070 "auth": { 00:14:09.070 "state": "completed", 00:14:09.070 "digest": "sha384", 00:14:09.070 "dhgroup": "ffdhe4096" 00:14:09.070 } 00:14:09.070 } 00:14:09.070 ]' 00:14:09.070 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:14:09.329 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:09.329 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:09.329 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:09.329 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:09.329 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.329 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.329 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.588 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:14:09.588 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:14:10.154 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.154 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:10.154 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.154 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.154 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.154 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:10.154 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.154 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:10.154 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:10.414 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:14:10.414 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:10.414 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:10.414 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:10.414 23:58:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:10.414 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.414 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.414 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.414 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.672 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.672 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.672 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.672 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:10.930 00:14:10.930 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.930 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.930 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.189 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.189 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.189 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.189 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.189 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.189 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.189 { 00:14:11.189 "cntlid": 81, 00:14:11.189 "qid": 0, 00:14:11.189 "state": "enabled", 00:14:11.189 "thread": "nvmf_tgt_poll_group_000", 00:14:11.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:11.189 "listen_address": { 00:14:11.189 "trtype": "TCP", 00:14:11.189 "adrfam": "IPv4", 00:14:11.189 "traddr": "10.0.0.3", 00:14:11.189 "trsvcid": "4420" 00:14:11.189 }, 00:14:11.189 "peer_address": { 00:14:11.189 "trtype": "TCP", 00:14:11.189 "adrfam": "IPv4", 00:14:11.189 "traddr": "10.0.0.1", 00:14:11.189 "trsvcid": "42870" 00:14:11.189 }, 00:14:11.189 "auth": { 00:14:11.189 "state": "completed", 00:14:11.189 "digest": "sha384", 00:14:11.189 "dhgroup": "ffdhe6144" 
00:14:11.189 } 00:14:11.189 } 00:14:11.189 ]' 00:14:11.189 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:11.447 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:11.447 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.447 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:11.447 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.447 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.447 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.447 23:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.705 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:14:11.705 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:14:12.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:12.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:12.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:12.638 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:14:12.638 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.638 23:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:12.638 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:12.638 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:12.638 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.638 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.638 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.638 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.638 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.638 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.638 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.638 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.207 00:14:13.207 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:13.207 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.207 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.472 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.472 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.472 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.472 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.472 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.472 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.472 { 00:14:13.472 "cntlid": 83, 00:14:13.472 "qid": 0, 00:14:13.472 "state": "enabled", 00:14:13.472 "thread": "nvmf_tgt_poll_group_000", 00:14:13.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:13.472 "listen_address": { 00:14:13.472 "trtype": "TCP", 00:14:13.472 "adrfam": "IPv4", 00:14:13.472 "traddr": "10.0.0.3", 00:14:13.472 "trsvcid": "4420" 00:14:13.472 }, 00:14:13.472 "peer_address": { 00:14:13.472 "trtype": "TCP", 00:14:13.472 "adrfam": 
"IPv4", 00:14:13.472 "traddr": "10.0.0.1", 00:14:13.472 "trsvcid": "36282" 00:14:13.472 }, 00:14:13.472 "auth": { 00:14:13.472 "state": "completed", 00:14:13.472 "digest": "sha384", 00:14:13.472 "dhgroup": "ffdhe6144" 00:14:13.472 } 00:14:13.472 } 00:14:13.472 ]' 00:14:13.472 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.472 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:13.472 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.733 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:13.733 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.733 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.733 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.733 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.992 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:14:13.992 23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:14:14.559 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.559 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:14.559 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.559 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.559 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.559 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.559 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:14.559 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:14.818 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:14:14.818 23:58:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.818 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:14.818 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:14.818 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:14.818 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.818 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.818 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.818 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.818 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.818 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.818 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.819 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.078 00:14:15.336 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:15.336 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.336 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.336 23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.336 23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.336 23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.336 23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.595 23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.595 23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.595 { 00:14:15.595 "cntlid": 85, 00:14:15.595 "qid": 0, 00:14:15.595 "state": "enabled", 00:14:15.595 "thread": "nvmf_tgt_poll_group_000", 00:14:15.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:15.595 "listen_address": { 00:14:15.595 "trtype": "TCP", 00:14:15.595 "adrfam": "IPv4", 00:14:15.595 "traddr": "10.0.0.3", 
00:14:15.595 "trsvcid": "4420" 00:14:15.595 }, 00:14:15.595 "peer_address": { 00:14:15.595 "trtype": "TCP", 00:14:15.595 "adrfam": "IPv4", 00:14:15.595 "traddr": "10.0.0.1", 00:14:15.595 "trsvcid": "36322" 00:14:15.595 }, 00:14:15.595 "auth": { 00:14:15.595 "state": "completed", 00:14:15.595 "digest": "sha384", 00:14:15.595 "dhgroup": "ffdhe6144" 00:14:15.595 } 00:14:15.595 } 00:14:15.595 ]' 00:14:15.595 23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.595 23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:15.595 23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.595 23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:15.595 23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.595 23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.595 23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.595 23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.853 23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:14:15.853 23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:14:16.420 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.420 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:16.420 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.420 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.420 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.420 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.420 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:16.420 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:16.986 23:58:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:14:16.986 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.986 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:16.986 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:16.986 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:16.986 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.986 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3 00:14:16.986 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.986 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.986 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.986 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:16.986 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:16.986 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:17.244 00:14:17.244 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:17.244 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:17.244 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.503 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.503 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.503 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.503 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.503 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.503 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.503 { 00:14:17.503 "cntlid": 87, 00:14:17.503 "qid": 0, 00:14:17.503 "state": "enabled", 00:14:17.503 "thread": "nvmf_tgt_poll_group_000", 00:14:17.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:17.503 "listen_address": { 00:14:17.503 "trtype": "TCP", 00:14:17.503 "adrfam": "IPv4", 
00:14:17.503 "traddr": "10.0.0.3", 00:14:17.503 "trsvcid": "4420" 00:14:17.503 }, 00:14:17.503 "peer_address": { 00:14:17.503 "trtype": "TCP", 00:14:17.503 "adrfam": "IPv4", 00:14:17.503 "traddr": "10.0.0.1", 00:14:17.503 "trsvcid": "36348" 00:14:17.503 }, 00:14:17.503 "auth": { 00:14:17.503 "state": "completed", 00:14:17.503 "digest": "sha384", 00:14:17.503 "dhgroup": "ffdhe6144" 00:14:17.503 } 00:14:17.503 } 00:14:17.503 ]' 00:14:17.503 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.503 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:17.503 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.762 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:17.762 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.762 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.762 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.762 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.020 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:14:18.020 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:14:18.588 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.846 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:18.846 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.846 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.846 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.846 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:18.846 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.846 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:18.846 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
00:14:19.104 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:14:19.104 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:19.104 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:19.104 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:19.104 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:19.104 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.104 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.104 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.104 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.104 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.104 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.104 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.104 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.671 00:14:19.671 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.671 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.671 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.929 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.929 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.929 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.929 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.929 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.929 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.929 { 00:14:19.929 "cntlid": 89, 00:14:19.929 "qid": 0, 00:14:19.929 "state": "enabled", 00:14:19.929 "thread": "nvmf_tgt_poll_group_000", 00:14:19.929 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:19.929 "listen_address": { 00:14:19.929 "trtype": "TCP", 00:14:19.929 "adrfam": "IPv4", 00:14:19.929 "traddr": "10.0.0.3", 00:14:19.929 "trsvcid": "4420" 00:14:19.929 }, 00:14:19.929 "peer_address": { 00:14:19.929 "trtype": "TCP", 00:14:19.929 "adrfam": "IPv4", 00:14:19.929 "traddr": "10.0.0.1", 00:14:19.929 "trsvcid": "36380" 00:14:19.929 }, 00:14:19.929 "auth": { 00:14:19.929 "state": "completed", 00:14:19.929 "digest": "sha384", 00:14:19.929 "dhgroup": "ffdhe8192" 00:14:19.929 } 00:14:19.929 } 00:14:19.929 ]' 00:14:19.929 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.929 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:19.929 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.929 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:19.929 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.929 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.929 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.929 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.497 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:14:20.497 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:14:21.064 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.064 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:21.064 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.064 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.064 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.064 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:21.064 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:14:21.064 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:21.323 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:14:21.323 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:21.323 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:21.323 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:21.323 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:21.323 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.323 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.323 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.323 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.323 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.323 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.323 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.323 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.891 00:14:21.891 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.891 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.891 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.149 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.149 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.149 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.149 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.149 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.149 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@74 -- # qpairs='[ 00:14:22.149 { 00:14:22.149 "cntlid": 91, 00:14:22.149 "qid": 0, 00:14:22.149 "state": "enabled", 00:14:22.149 "thread": "nvmf_tgt_poll_group_000", 00:14:22.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:22.149 "listen_address": { 00:14:22.149 "trtype": "TCP", 00:14:22.149 "adrfam": "IPv4", 00:14:22.149 "traddr": "10.0.0.3", 00:14:22.149 "trsvcid": "4420" 00:14:22.149 }, 00:14:22.149 "peer_address": { 00:14:22.149 "trtype": "TCP", 00:14:22.149 "adrfam": "IPv4", 00:14:22.149 "traddr": "10.0.0.1", 00:14:22.149 "trsvcid": "36406" 00:14:22.149 }, 00:14:22.149 "auth": { 00:14:22.149 "state": "completed", 00:14:22.149 "digest": "sha384", 00:14:22.149 "dhgroup": "ffdhe8192" 00:14:22.149 } 00:14:22.149 } 00:14:22.149 ]' 00:14:22.149 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:22.149 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:22.149 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.149 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:22.149 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.408 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.408 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.408 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.666 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:14:22.666 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:14:23.233 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.233 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:23.233 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.233 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.233 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.233 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.233 23:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:23.233 23:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:23.492 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:14:23.492 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.492 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:23.492 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:23.492 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:23.492 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.492 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.492 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.492 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.492 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.492 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.492 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.492 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.060 00:14:24.319 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:24.319 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.319 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.579 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.579 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.579 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.579 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.579 23:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.579 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.579 { 00:14:24.579 "cntlid": 93, 00:14:24.579 "qid": 0, 00:14:24.579 "state": "enabled", 00:14:24.579 "thread": "nvmf_tgt_poll_group_000", 00:14:24.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:24.579 "listen_address": { 00:14:24.579 "trtype": "TCP", 00:14:24.579 "adrfam": "IPv4", 00:14:24.579 "traddr": "10.0.0.3", 00:14:24.579 "trsvcid": "4420" 00:14:24.579 }, 00:14:24.579 "peer_address": { 00:14:24.579 "trtype": "TCP", 00:14:24.579 "adrfam": "IPv4", 00:14:24.579 "traddr": "10.0.0.1", 00:14:24.579 "trsvcid": "53204" 00:14:24.579 }, 00:14:24.579 "auth": { 00:14:24.579 "state": "completed", 00:14:24.579 "digest": "sha384", 00:14:24.579 "dhgroup": "ffdhe8192" 00:14:24.579 } 00:14:24.579 } 00:14:24.579 ]' 00:14:24.579 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.579 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:24.579 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.579 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:24.579 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.579 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.579 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.579 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.838 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:14:24.838 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:14:25.776 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.776 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:25.776 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.776 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.776 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
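That closes the key2 round for sha384/ffdhe8192; the keyid loop advances to key3 below. The [[ sha384 == \s\h\a\3\8\4 ]]-style comparisons repeated in every round are one verification step: dump the subsystem's qpairs and assert on the negotiated auth fields. A minimal sketch of that check, assuming the same subsystem NQN and the trace's rpc_cmd wrapper:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # Digest and dhgroup must be the ones bdev_nvme_set_options allowed,
    # and the qpair must have finished DH-HMAC-CHAP authentication.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
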
00:14:25.776 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.776 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:25.776 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:26.037 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:14:26.037 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:26.037 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:26.037 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:26.037 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:26.037 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.037 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3 00:14:26.037 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.037 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.037 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.037 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:26.037 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:26.038 23:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:26.604 00:14:26.604 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.604 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.604 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.863 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.863 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.863 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.863 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.863 
23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.863 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.863 { 00:14:26.863 "cntlid": 95, 00:14:26.863 "qid": 0, 00:14:26.863 "state": "enabled", 00:14:26.863 "thread": "nvmf_tgt_poll_group_000", 00:14:26.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:26.863 "listen_address": { 00:14:26.863 "trtype": "TCP", 00:14:26.863 "adrfam": "IPv4", 00:14:26.863 "traddr": "10.0.0.3", 00:14:26.863 "trsvcid": "4420" 00:14:26.863 }, 00:14:26.863 "peer_address": { 00:14:26.863 "trtype": "TCP", 00:14:26.863 "adrfam": "IPv4", 00:14:26.863 "traddr": "10.0.0.1", 00:14:26.863 "trsvcid": "53238" 00:14:26.863 }, 00:14:26.863 "auth": { 00:14:26.863 "state": "completed", 00:14:26.863 "digest": "sha384", 00:14:26.863 "dhgroup": "ffdhe8192" 00:14:26.863 } 00:14:26.863 } 00:14:26.863 ]' 00:14:26.863 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.863 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:26.863 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.863 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:26.863 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.122 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.122 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.122 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.381 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:14:27.381 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:14:27.950 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.950 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:27.950 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.950 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.950 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.950 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in 
"${digests[@]}" 00:14:27.950 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:27.950 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.950 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:27.950 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:28.210 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:14:28.210 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.210 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:28.210 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:28.210 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:28.210 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.210 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.210 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.210 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.210 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.210 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.210 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.210 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.469 00:14:28.728 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:28.728 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.728 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.987 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.987 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.987 23:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.987 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.987 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.987 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.987 { 00:14:28.987 "cntlid": 97, 00:14:28.987 "qid": 0, 00:14:28.987 "state": "enabled", 00:14:28.987 "thread": "nvmf_tgt_poll_group_000", 00:14:28.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:28.987 "listen_address": { 00:14:28.987 "trtype": "TCP", 00:14:28.987 "adrfam": "IPv4", 00:14:28.987 "traddr": "10.0.0.3", 00:14:28.987 "trsvcid": "4420" 00:14:28.987 }, 00:14:28.987 "peer_address": { 00:14:28.987 "trtype": "TCP", 00:14:28.987 "adrfam": "IPv4", 00:14:28.987 "traddr": "10.0.0.1", 00:14:28.987 "trsvcid": "53262" 00:14:28.987 }, 00:14:28.987 "auth": { 00:14:28.987 "state": "completed", 00:14:28.987 "digest": "sha512", 00:14:28.987 "dhgroup": "null" 00:14:28.987 } 00:14:28.987 } 00:14:28.987 ]' 00:14:28.987 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.987 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:28.987 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.987 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:28.987 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.987 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.987 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.987 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.247 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:14:29.247 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:14:29.814 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.814 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:29.814 23:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.814 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.814 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.814 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:29.814 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:29.814 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:30.072 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:30.072 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.072 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:30.072 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:30.072 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:30.072 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.072 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.072 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.072 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.072 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.072 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.072 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.072 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.330 00:14:30.589 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.589 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.589 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.589 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:14:30.589 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.589 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.589 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.589 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.589 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.589 { 00:14:30.589 "cntlid": 99, 00:14:30.589 "qid": 0, 00:14:30.589 "state": "enabled", 00:14:30.589 "thread": "nvmf_tgt_poll_group_000", 00:14:30.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:30.589 "listen_address": { 00:14:30.589 "trtype": "TCP", 00:14:30.589 "adrfam": "IPv4", 00:14:30.589 "traddr": "10.0.0.3", 00:14:30.589 "trsvcid": "4420" 00:14:30.589 }, 00:14:30.589 "peer_address": { 00:14:30.589 "trtype": "TCP", 00:14:30.589 "adrfam": "IPv4", 00:14:30.589 "traddr": "10.0.0.1", 00:14:30.589 "trsvcid": "53292" 00:14:30.589 }, 00:14:30.589 "auth": { 00:14:30.589 "state": "completed", 00:14:30.589 "digest": "sha512", 00:14:30.589 "dhgroup": "null" 00:14:30.589 } 00:14:30.589 } 00:14:30.589 ]' 00:14:30.589 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:30.848 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:30.848 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:30.848 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:30.848 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.848 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.848 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.848 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.107 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:14:31.107 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:14:31.674 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.674 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:31.674 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.674 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.674 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.674 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.674 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:31.674 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:31.933 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:31.933 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.933 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:31.933 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:31.933 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:31.933 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.933 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.933 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.933 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.933 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.933 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.933 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.933 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.192 00:14:32.451 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.451 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.451 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.714 23:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.714 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.714 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.714 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.714 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.714 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.714 { 00:14:32.714 "cntlid": 101, 00:14:32.714 "qid": 0, 00:14:32.714 "state": "enabled", 00:14:32.714 "thread": "nvmf_tgt_poll_group_000", 00:14:32.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:32.714 "listen_address": { 00:14:32.714 "trtype": "TCP", 00:14:32.714 "adrfam": "IPv4", 00:14:32.714 "traddr": "10.0.0.3", 00:14:32.714 "trsvcid": "4420" 00:14:32.714 }, 00:14:32.714 "peer_address": { 00:14:32.714 "trtype": "TCP", 00:14:32.714 "adrfam": "IPv4", 00:14:32.714 "traddr": "10.0.0.1", 00:14:32.714 "trsvcid": "53306" 00:14:32.714 }, 00:14:32.714 "auth": { 00:14:32.714 "state": "completed", 00:14:32.714 "digest": "sha512", 00:14:32.714 "dhgroup": "null" 00:14:32.714 } 00:14:32.714 } 00:14:32.714 ]' 00:14:32.714 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.714 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:32.714 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.714 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:32.714 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.714 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.714 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.714 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.973 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:14:32.973 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:14:33.626 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.626 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- 
# rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:33.626 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.626 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.626 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.626 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.626 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:33.626 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:33.885 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:33.885 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.885 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:33.885 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:33.885 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:33.885 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.885 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3 00:14:33.885 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.885 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.885 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.885 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:33.885 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:33.885 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:34.454 00:14:34.454 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.454 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.454 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.454 23:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.454 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.454 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.454 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.454 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.454 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.454 { 00:14:34.454 "cntlid": 103, 00:14:34.454 "qid": 0, 00:14:34.454 "state": "enabled", 00:14:34.454 "thread": "nvmf_tgt_poll_group_000", 00:14:34.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:34.454 "listen_address": { 00:14:34.454 "trtype": "TCP", 00:14:34.454 "adrfam": "IPv4", 00:14:34.454 "traddr": "10.0.0.3", 00:14:34.454 "trsvcid": "4420" 00:14:34.454 }, 00:14:34.454 "peer_address": { 00:14:34.454 "trtype": "TCP", 00:14:34.454 "adrfam": "IPv4", 00:14:34.454 "traddr": "10.0.0.1", 00:14:34.454 "trsvcid": "58192" 00:14:34.454 }, 00:14:34.454 "auth": { 00:14:34.454 "state": "completed", 00:14:34.454 "digest": "sha512", 00:14:34.454 "dhgroup": "null" 00:14:34.454 } 00:14:34.454 } 00:14:34.454 ]' 00:14:34.713 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.713 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:34.713 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.713 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:34.713 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.713 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.713 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.713 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.973 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:14:34.973 23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:14:35.541 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.542 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:35.542 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.542 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.800 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.800 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:35.800 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.800 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:35.800 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:36.060 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:36.060 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.060 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:36.060 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:36.060 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:36.060 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.060 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.060 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.060 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.060 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.060 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.060 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.060 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.319 00:14:36.319 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.319 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.319 23:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.578 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.578 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.578 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.578 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.578 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.578 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.578 { 00:14:36.578 "cntlid": 105, 00:14:36.578 "qid": 0, 00:14:36.578 "state": "enabled", 00:14:36.578 "thread": "nvmf_tgt_poll_group_000", 00:14:36.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:36.578 "listen_address": { 00:14:36.578 "trtype": "TCP", 00:14:36.578 "adrfam": "IPv4", 00:14:36.578 "traddr": "10.0.0.3", 00:14:36.578 "trsvcid": "4420" 00:14:36.578 }, 00:14:36.578 "peer_address": { 00:14:36.578 "trtype": "TCP", 00:14:36.578 "adrfam": "IPv4", 00:14:36.578 "traddr": "10.0.0.1", 00:14:36.578 "trsvcid": "58208" 00:14:36.578 }, 00:14:36.578 "auth": { 00:14:36.578 "state": "completed", 00:14:36.578 "digest": "sha512", 00:14:36.578 "dhgroup": "ffdhe2048" 00:14:36.578 } 00:14:36.578 } 00:14:36.578 ]' 00:14:36.578 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.578 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:36.578 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.578 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:36.578 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.578 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.578 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.578 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.145 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:14:37.145 23:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 
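[editor's note] The trace above repeats the same verification step after every attach: list the host's controllers, then query the target's qpairs and assert on the negotiated auth fields. Below is a minimal bash sketch of that step, condensed from the xtrace; the RPC script path, socket, subsystem NQN, and jq filters are exactly the ones the log shows, while the function name verify_qpair_auth and the assumption that rpc_cmd talks to the target's default RPC socket (as opposed to the host's /var/tmp/host.sock) are illustrative, not taken from the script itself.

#!/usr/bin/env bash
# Sketch only: condensed from the xtrace above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock   # host-side SPDK app; target side uses the default socket

verify_qpair_auth() {
  local digest=$1 dhgroup=$2 subnqn=nqn.2024-03.io.spdk:cnode0 qpairs
  # The controller attached with -b nvme0 must be visible on the host side.
  [[ $("$RPC" -s "$HOSTSOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # Ask the target for the subsystem's active qpairs and check what was negotiated.
  qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$digest" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]
}

The last assertion is the one that matters for the test's purpose: a qpair whose auth.state is "completed" actually finished the DH-HMAC-CHAP exchange rather than connecting unauthenticated.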
00:14:37.713 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.713 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:37.713 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.713 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.713 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.713 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.713 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:37.713 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:37.973 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:37.973 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.973 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:37.973 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:37.973 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:37.973 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.973 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.973 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.973 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.973 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.973 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.973 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.973 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.232 00:14:38.493 23:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:38.493 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:38.494 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.758 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.758 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.758 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.758 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.758 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.758 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:38.758 { 00:14:38.758 "cntlid": 107, 00:14:38.758 "qid": 0, 00:14:38.758 "state": "enabled", 00:14:38.758 "thread": "nvmf_tgt_poll_group_000", 00:14:38.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:38.758 "listen_address": { 00:14:38.758 "trtype": "TCP", 00:14:38.758 "adrfam": "IPv4", 00:14:38.758 "traddr": "10.0.0.3", 00:14:38.758 "trsvcid": "4420" 00:14:38.758 }, 00:14:38.758 "peer_address": { 00:14:38.758 "trtype": "TCP", 00:14:38.758 "adrfam": "IPv4", 00:14:38.758 "traddr": "10.0.0.1", 00:14:38.758 "trsvcid": "58242" 00:14:38.758 }, 00:14:38.758 "auth": { 00:14:38.758 "state": "completed", 00:14:38.758 "digest": "sha512", 00:14:38.758 "dhgroup": "ffdhe2048" 00:14:38.758 } 00:14:38.758 } 00:14:38.758 ]' 00:14:38.758 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:38.758 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:38.758 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.758 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:38.758 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.758 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.758 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.758 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.018 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:14:39.018 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret 
DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:14:39.586 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.586 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:39.586 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.586 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.586 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.586 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.586 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:39.586 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:39.846 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:39.846 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.846 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:39.846 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:39.846 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:39.846 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.846 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.846 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.846 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.846 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.846 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.846 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.105 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.364 00:14:40.364 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.364 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.364 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.622 23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.622 23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.622 23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.622 23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.623 23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.623 23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.623 { 00:14:40.623 "cntlid": 109, 00:14:40.623 "qid": 0, 00:14:40.623 "state": "enabled", 00:14:40.623 "thread": "nvmf_tgt_poll_group_000", 00:14:40.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:40.623 "listen_address": { 00:14:40.623 "trtype": "TCP", 00:14:40.623 "adrfam": "IPv4", 00:14:40.623 "traddr": "10.0.0.3", 00:14:40.623 "trsvcid": "4420" 00:14:40.623 }, 00:14:40.623 "peer_address": { 00:14:40.623 "trtype": "TCP", 00:14:40.623 "adrfam": "IPv4", 00:14:40.623 "traddr": "10.0.0.1", 00:14:40.623 "trsvcid": "58260" 00:14:40.623 }, 00:14:40.623 "auth": { 00:14:40.623 "state": "completed", 00:14:40.623 "digest": "sha512", 00:14:40.623 "dhgroup": "ffdhe2048" 00:14:40.623 } 00:14:40.623 } 00:14:40.623 ]' 00:14:40.623 23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.623 23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:40.623 23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.623 23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:40.623 23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.623 23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.623 23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.623 23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.882 23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:14:40.882 23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:14:41.450 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.450 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:41.450 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.450 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.450 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.450 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.450 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:41.450 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:42.019 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:42.019 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:42.019 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:42.019 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:42.019 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:42.019 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.019 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3 00:14:42.019 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.019 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.019 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.019 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:42.019 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:42.019 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:42.278 00:14:42.278 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.278 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.278 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.537 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.537 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.538 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.538 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.538 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.538 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.538 { 00:14:42.538 "cntlid": 111, 00:14:42.538 "qid": 0, 00:14:42.538 "state": "enabled", 00:14:42.538 "thread": "nvmf_tgt_poll_group_000", 00:14:42.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:42.538 "listen_address": { 00:14:42.538 "trtype": "TCP", 00:14:42.538 "adrfam": "IPv4", 00:14:42.538 "traddr": "10.0.0.3", 00:14:42.538 "trsvcid": "4420" 00:14:42.538 }, 00:14:42.538 "peer_address": { 00:14:42.538 "trtype": "TCP", 00:14:42.538 "adrfam": "IPv4", 00:14:42.538 "traddr": "10.0.0.1", 00:14:42.538 "trsvcid": "58280" 00:14:42.538 }, 00:14:42.538 "auth": { 00:14:42.538 "state": "completed", 00:14:42.538 "digest": "sha512", 00:14:42.538 "dhgroup": "ffdhe2048" 00:14:42.538 } 00:14:42.538 } 00:14:42.538 ]' 00:14:42.538 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.538 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:42.538 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.538 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:42.538 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.538 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.538 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.538 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.796 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:14:42.796 23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:14:43.364 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.364 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:43.364 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.364 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.364 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.364 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:43.364 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.364 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:43.364 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:43.932 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:43.932 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.932 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:43.932 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:43.932 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:43.932 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.932 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.932 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.932 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.932 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.932 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.932 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.932 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.192 00:14:44.192 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.192 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.192 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.451 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.451 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.451 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.451 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.451 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.451 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.451 { 00:14:44.451 "cntlid": 113, 00:14:44.451 "qid": 0, 00:14:44.451 "state": "enabled", 00:14:44.451 "thread": "nvmf_tgt_poll_group_000", 00:14:44.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:14:44.451 "listen_address": { 00:14:44.451 "trtype": "TCP", 00:14:44.451 "adrfam": "IPv4", 00:14:44.451 "traddr": "10.0.0.3", 00:14:44.451 "trsvcid": "4420" 00:14:44.451 }, 00:14:44.451 "peer_address": { 00:14:44.451 "trtype": "TCP", 00:14:44.451 "adrfam": "IPv4", 00:14:44.451 "traddr": "10.0.0.1", 00:14:44.451 "trsvcid": "39742" 00:14:44.451 }, 00:14:44.451 "auth": { 00:14:44.451 "state": "completed", 00:14:44.451 "digest": "sha512", 00:14:44.451 "dhgroup": "ffdhe3072" 00:14:44.451 } 00:14:44.451 } 00:14:44.451 ]' 00:14:44.451 23:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.451 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.451 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.451 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:44.451 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.451 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.451 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.451 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.711 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret 
DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:14:44.711 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:14:45.648 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.648 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:14:45.648 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.648 23:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.648 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.648 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.648 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:45.648 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:45.648 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:14:45.648 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.648 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:45.648 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:45.648 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:45.648 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.648 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.648 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.648 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.648 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.648 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.648 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:45.648 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:46.216
00:14:46.216 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:46.216 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:46.216 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:46.475 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:46.475 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:46.475 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:46.475 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:46.475 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:46.475 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:46.475 {
00:14:46.475 "cntlid": 115,
00:14:46.475 "qid": 0,
00:14:46.475 "state": "enabled",
00:14:46.475 "thread": "nvmf_tgt_poll_group_000",
00:14:46.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a",
00:14:46.475 "listen_address": {
00:14:46.475 "trtype": "TCP",
00:14:46.475 "adrfam": "IPv4",
00:14:46.475 "traddr": "10.0.0.3",
00:14:46.475 "trsvcid": "4420"
00:14:46.475 },
00:14:46.475 "peer_address": {
00:14:46.475 "trtype": "TCP",
00:14:46.475 "adrfam": "IPv4",
00:14:46.475 "traddr": "10.0.0.1",
00:14:46.475 "trsvcid": "39772"
00:14:46.475 },
00:14:46.475 "auth": {
00:14:46.475 "state": "completed",
00:14:46.475 "digest": "sha512",
00:14:46.475 "dhgroup": "ffdhe3072"
00:14:46.475 }
00:14:46.475 }
00:14:46.475 ]'
00:14:46.475 23:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:46.475 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:46.475 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:46.475 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:46.475 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:46.475 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:46.475 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:46.475 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:47.041 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==:
00:14:47.041 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==:
00:14:47.612 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:47.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:47.612 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a
00:14:47.612 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.612 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:47.612 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.612 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:47.612 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:14:47.612 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:14:47.871 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2
00:14:47.871 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:47.871 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:47.871 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:14:47.871 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:47.871 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:47.871 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:47.872 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:47.872 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:47.872 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:47.872 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:47.872 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:47.872 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:48.131
00:14:48.131 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:48.131 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:48.131 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:48.390 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:48.390 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:48.390 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.390 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:48.390 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.390 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:48.390 {
00:14:48.390 "cntlid": 117,
00:14:48.390 "qid": 0,
00:14:48.390 "state": "enabled",
00:14:48.390 "thread": "nvmf_tgt_poll_group_000",
00:14:48.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a",
00:14:48.390 "listen_address": {
00:14:48.390 "trtype": "TCP",
00:14:48.390 "adrfam": "IPv4",
00:14:48.390 "traddr": "10.0.0.3",
00:14:48.390 "trsvcid": "4420"
00:14:48.390 },
00:14:48.390 "peer_address": {
00:14:48.390 "trtype": "TCP",
00:14:48.390 "adrfam": "IPv4",
00:14:48.390 "traddr": "10.0.0.1",
00:14:48.390 "trsvcid": "39792"
00:14:48.390 },
00:14:48.390 "auth": {
00:14:48.390 "state": "completed",
00:14:48.390 "digest": "sha512",
00:14:48.390 "dhgroup": "ffdhe3072"
00:14:48.390 }
00:14:48.390 }
00:14:48.390 ]'
00:14:48.390 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:48.390 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:48.390 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:48.390 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:48.390 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:48.649 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:48.649 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:48.649 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:48.908 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/:
00:14:48.908 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/:
00:14:49.476 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:49.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:49.476 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a
00:14:49.476 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.476 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:49.476 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.476 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:49.476 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:14:49.477 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:14:49.736 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3
00:14:49.736 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:49.736 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:49.736 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:14:49.736 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:49.736 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:49.736 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3
00:14:49.736 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:49.736 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:49.736 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:49.736 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:49.736 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:49.736 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:50.304
00:14:50.304 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:50.304 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:50.304 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:50.564 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:50.564 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:50.564 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.564 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:50.564 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.564 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:50.564 {
00:14:50.564 "cntlid": 119,
00:14:50.564 "qid": 0,
00:14:50.564 "state": "enabled",
00:14:50.564 "thread": "nvmf_tgt_poll_group_000",
00:14:50.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a",
00:14:50.564 "listen_address": {
00:14:50.564 "trtype": "TCP",
00:14:50.564 "adrfam": "IPv4",
00:14:50.564 "traddr": "10.0.0.3",
00:14:50.564 "trsvcid": "4420"
00:14:50.564 },
00:14:50.564 "peer_address": {
00:14:50.564 "trtype": "TCP",
00:14:50.564 "adrfam": "IPv4",
00:14:50.564 "traddr": "10.0.0.1",
00:14:50.564 "trsvcid": "39820"
00:14:50.564 },
00:14:50.564 "auth": {
00:14:50.564 "state": "completed",
00:14:50.564 "digest": "sha512",
00:14:50.564 "dhgroup": "ffdhe3072"
00:14:50.564 }
00:14:50.564 }
00:14:50.564 ]'
00:14:50.564 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:50.564 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:50.564 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:50.564 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:50.564 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:50.564 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:50.564 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:50.564 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:51.133 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=:
00:14:51.133 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=:
00:14:51.701 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:51.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:51.701 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a
00:14:51.701 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:51.701 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:51.701 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:51.701 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:51.702 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:51.702 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:51.702 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:51.960 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:14:51.960 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:51.960 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:51.960 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:51.960 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:51.960 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:51.960 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:51.960 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:51.960 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:51.960 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:51.960 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:51.960 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:51.961 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:52.219
00:14:52.219 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:52.219 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:52.219 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:52.479 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:52.479 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:52.479 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:52.479 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:52.479 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:52.479 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:52.479 {
00:14:52.479 "cntlid": 121,
00:14:52.479 "qid": 0,
00:14:52.479 "state": "enabled",
00:14:52.479 "thread": "nvmf_tgt_poll_group_000",
00:14:52.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a",
00:14:52.479 "listen_address": {
00:14:52.479 "trtype": "TCP",
00:14:52.479 "adrfam": "IPv4",
00:14:52.479 "traddr": "10.0.0.3",
00:14:52.479 "trsvcid": "4420"
00:14:52.479 },
00:14:52.479 "peer_address": {
00:14:52.479 "trtype": "TCP",
00:14:52.479 "adrfam": "IPv4",
00:14:52.479 "traddr": "10.0.0.1",
00:14:52.479 "trsvcid": "39850"
00:14:52.479 },
00:14:52.479 "auth": {
00:14:52.479 "state": "completed",
00:14:52.479 "digest": "sha512",
00:14:52.479 "dhgroup": "ffdhe4096"
00:14:52.479 }
00:14:52.479 }
00:14:52.479 ]'
00:14:52.737 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:52.737 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:52.737 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:52.737 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:52.737 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:52.737 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:52.737 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:52.737 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:53.001 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=:
00:14:53.001 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=:
00:14:53.600 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:53.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:53.600 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a
00:14:53.600 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:53.600 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:53.600 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:53.600 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:53.600 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:53.600 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:54.168 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:14:54.168 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:54.168 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:54.168 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:54.168 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:54.168 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:54.168 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:54.168 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.168 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:54.168 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.168 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:54.168 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:54.168 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:54.427
00:14:54.427 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:54.427 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:54.427 23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:54.686 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:54.686 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:54.686 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:54.686 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:54.686 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:54.686 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:54.686 {
00:14:54.686 "cntlid": 123,
00:14:54.686 "qid": 0,
00:14:54.686 "state": "enabled",
00:14:54.686 "thread": "nvmf_tgt_poll_group_000",
00:14:54.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a",
00:14:54.686 "listen_address": {
00:14:54.686 "trtype": "TCP",
00:14:54.686 "adrfam": "IPv4",
00:14:54.686 "traddr": "10.0.0.3",
00:14:54.686 "trsvcid": "4420"
00:14:54.686 },
00:14:54.686 "peer_address": {
00:14:54.686 "trtype": "TCP",
00:14:54.686 "adrfam": "IPv4",
00:14:54.686 "traddr": "10.0.0.1",
00:14:54.686 "trsvcid": "46638"
00:14:54.686 },
00:14:54.686 "auth": {
00:14:54.686 "state": "completed",
00:14:54.686 "digest": "sha512",
00:14:54.686 "dhgroup": "ffdhe4096"
00:14:54.686 }
00:14:54.686 }
00:14:54.686 ]'
00:14:54.686 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:54.686 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:54.945 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:54.945 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:54.945 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:54.945 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:54.945 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:54.945 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:55.204 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==:
00:14:55.204 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==:
00:14:56.141 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:56.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:56.141 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a
00:14:56.141 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:56.141 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:56.141 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:56.141 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:56.141 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:56.141 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:56.141 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:14:56.141 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:56.141 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:56.141 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:56.142 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:56.142 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:56.142 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:56.142 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:56.142 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:56.142 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:56.142 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:56.142 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:56.142 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:56.710
00:14:56.710 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:56.710 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:56.710 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:56.969 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:56.969 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:56.969 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:56.969 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:56.969 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:56.969 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:56.969 {
00:14:56.969 "cntlid": 125,
00:14:56.969 "qid": 0,
00:14:56.969 "state": "enabled",
00:14:56.969 "thread": "nvmf_tgt_poll_group_000",
00:14:56.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a",
00:14:56.969 "listen_address": {
00:14:56.969 "trtype": "TCP",
00:14:56.969 "adrfam": "IPv4",
00:14:56.969 "traddr": "10.0.0.3",
00:14:56.969 "trsvcid": "4420"
00:14:56.969 },
00:14:56.969 "peer_address": {
00:14:56.969 "trtype": "TCP",
00:14:56.969 "adrfam": "IPv4",
00:14:56.969 "traddr": "10.0.0.1",
00:14:56.969 "trsvcid": "46670"
00:14:56.969 },
00:14:56.969 "auth": {
00:14:56.969 "state": "completed",
00:14:56.969 "digest": "sha512",
00:14:56.969 "dhgroup": "ffdhe4096"
00:14:56.969 }
00:14:56.969 }
00:14:56.969 ]'
00:14:56.969 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:56.969 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:56.969 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:56.969 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:56.969 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:56.969 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:56.969 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:56.969 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:57.229 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/:
00:14:57.229 23:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/:
00:14:58.166 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:58.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:58.166 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a
00:14:58.166 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:58.166 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:58.166 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:58.166 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:58.166 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:58.166 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:58.425 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:14:58.425 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:58.425 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:58.425 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:58.425 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:58.425 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:58.425 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3
00:14:58.425 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:58.425 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:58.425 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:58.425 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:58.425 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:58.425 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:58.684
00:14:58.684 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:58.684 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:58.684 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:58.943 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:58.943 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:58.943 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:58.943 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:58.943 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:58.943 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:58.943 {
00:14:58.943 "cntlid": 127,
00:14:58.943 "qid": 0,
00:14:58.943 "state": "enabled",
00:14:58.943 "thread": "nvmf_tgt_poll_group_000",
00:14:58.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a",
00:14:58.943 "listen_address": {
00:14:58.943 "trtype": "TCP",
00:14:58.943 "adrfam": "IPv4",
00:14:58.943 "traddr": "10.0.0.3",
00:14:58.943 "trsvcid": "4420"
00:14:58.943 },
00:14:58.943 "peer_address": {
00:14:58.943 "trtype": "TCP",
00:14:58.943 "adrfam": "IPv4",
00:14:58.943 "traddr": "10.0.0.1",
00:14:58.943 "trsvcid": "46698"
00:14:58.943 },
00:14:58.943 "auth": {
00:14:58.943 "state": "completed",
00:14:58.943 "digest": "sha512",
00:14:58.943 "dhgroup": "ffdhe4096"
00:14:58.943 }
00:14:58.943 }
00:14:58.943 ]'
00:14:58.943 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:58.943 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:58.943 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:59.202 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:59.202 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:59.202 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:59.202 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:59.202 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:59.461 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=:
00:14:59.461 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=:
00:15:00.031 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:00.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:00.031 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a
00:15:00.031 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:00.031 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:00.031 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:00.031 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:00.031 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:00.031 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:15:00.031 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:15:00.290 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:15:00.290 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:00.290 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:00.290 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:00.290 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:00.290 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:00.291 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:00.291 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:00.291 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:00.291 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:00.291 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:00.291 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:00.291 23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:00.858
00:15:00.858 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:00.858 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:00.858 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:01.117 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:01.117 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:01.117 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:01.117 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:01.117 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:01.117 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:01.117 {
00:15:01.117 "cntlid": 129,
00:15:01.117 "qid": 0,
00:15:01.117 "state": "enabled",
00:15:01.117 "thread": "nvmf_tgt_poll_group_000",
00:15:01.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a",
00:15:01.117 "listen_address": {
00:15:01.117 "trtype": "TCP",
00:15:01.117 "adrfam": "IPv4",
00:15:01.117 "traddr": "10.0.0.3",
00:15:01.117 "trsvcid": "4420"
00:15:01.117 },
00:15:01.117 "peer_address": {
00:15:01.117 "trtype": "TCP",
00:15:01.117 "adrfam": "IPv4",
00:15:01.117 "traddr": "10.0.0.1",
00:15:01.117 "trsvcid": "46728"
00:15:01.117 },
00:15:01.117 "auth": {
00:15:01.117 "state": "completed",
00:15:01.117 "digest": "sha512",
00:15:01.118 "dhgroup": "ffdhe6144"
00:15:01.118 }
00:15:01.118 }
00:15:01.118 ]'
00:15:01.118 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:01.118 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:01.118 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:01.118 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:01.118 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:01.376 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:01.376 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:01.376 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:01.376 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=:
00:15:01.376 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=:
00:15:01.945 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:02.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:02.204 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a
00:15:02.204 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.204 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:02.204 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.204 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:02.204 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:15:02.204 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:15:02.464 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1
00:15:02.464 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:02.464 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:02.464 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:02.464 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:02.464 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:02.464 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:02.464 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.464 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:02.464 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.464 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:02.464 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:02.464 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:02.721
00:15:02.721 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:02.721 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:02.721 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:02.981 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:02.981 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:02.981 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:02.981 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:02.981 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:02.981 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:02.981 {
00:15:02.981 "cntlid": 131,
00:15:02.981 "qid": 0,
00:15:02.981 "state": "enabled",
00:15:02.981 "thread": "nvmf_tgt_poll_group_000",
00:15:02.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a",
00:15:02.981 "listen_address": {
00:15:02.981 "trtype": "TCP",
00:15:02.981 "adrfam": "IPv4",
00:15:02.981 "traddr": "10.0.0.3",
00:15:02.981 "trsvcid": "4420"
00:15:02.981 },
00:15:02.981 "peer_address": {
00:15:02.981 "trtype": "TCP",
00:15:02.981 "adrfam": "IPv4",
00:15:02.981 "traddr": "10.0.0.1",
00:15:02.981 "trsvcid": "46748"
00:15:02.981 },
00:15:02.981 "auth": {
00:15:02.981 "state": "completed",
00:15:02.981 "digest": "sha512",
00:15:02.981 "dhgroup": "ffdhe6144"
00:15:02.981 }
00:15:02.981 }
00:15:02.981 ]'
00:15:02.981 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:03.240 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:03.240 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:03.241 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:03.241 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:03.241 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:03.241 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:03.241 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:03.500 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==:
00:15:03.500 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==:
00:15:04.438 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:04.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:04.438 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a
00:15:04.438 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.438 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:04.438 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.438 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:04.438 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:15:04.438 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:15:04.438 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2
00:15:04.438 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:04.438 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:04.438 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:15:04.438 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:04.438 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:04.438 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:04.438 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.438 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:04.438 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.438 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:04.438 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:04.438 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:05.007
00:15:05.007 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:05.007 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:05.007 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:05.266 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:05.266 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:05.266 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.266 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:05.266 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.266 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:05.266 {
00:15:05.266 "cntlid": 133,
00:15:05.266 "qid": 0,
00:15:05.266 "state": "enabled",
00:15:05.266 "thread": "nvmf_tgt_poll_group_000",
00:15:05.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a",
00:15:05.266 "listen_address": {
00:15:05.266 "trtype": "TCP",
00:15:05.266 "adrfam": "IPv4",
00:15:05.267 "traddr": "10.0.0.3",
00:15:05.267 "trsvcid": "4420"
00:15:05.267 },
00:15:05.267 "peer_address": {
00:15:05.267 "trtype": "TCP",
00:15:05.267 "adrfam": "IPv4",
00:15:05.267 "traddr": "10.0.0.1",
00:15:05.267 "trsvcid": "38230"
00:15:05.267 },
00:15:05.267 "auth": {
00:15:05.267 "state": "completed",
00:15:05.267 "digest": "sha512",
00:15:05.267 "dhgroup": "ffdhe6144"
00:15:05.267 }
00:15:05.267 }
00:15:05.267 ]'
00:15:05.267 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:05.267 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:15:05.267 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:05.267 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:15:05.267 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:05.527 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:05.527 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:05.527 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:05.796 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/:
00:15:05.796 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/:
00:15:06.364 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:06.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:06.364 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a
00:15:06.364 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:06.364 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:06.364 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:06.364 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:06.364 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:15:06.364 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:15:06.623 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:15:06.623 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:06.623 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67
-- # digest=sha512 00:15:06.623 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:06.623 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:06.623 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.623 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3 00:15:06.623 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.623 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.623 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.623 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:06.623 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.623 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:07.191 00:15:07.191 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.191 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.191 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.450 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.450 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.450 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.450 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.450 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.450 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.450 { 00:15:07.450 "cntlid": 135, 00:15:07.450 "qid": 0, 00:15:07.450 "state": "enabled", 00:15:07.450 "thread": "nvmf_tgt_poll_group_000", 00:15:07.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:15:07.450 "listen_address": { 00:15:07.450 "trtype": "TCP", 00:15:07.450 "adrfam": "IPv4", 00:15:07.450 "traddr": "10.0.0.3", 00:15:07.450 "trsvcid": "4420" 00:15:07.450 }, 00:15:07.450 "peer_address": { 00:15:07.450 "trtype": "TCP", 00:15:07.450 "adrfam": "IPv4", 00:15:07.450 "traddr": "10.0.0.1", 00:15:07.450 "trsvcid": "38268" 00:15:07.450 }, 00:15:07.450 "auth": { 00:15:07.450 "state": "completed", 00:15:07.450 
"digest": "sha512", 00:15:07.450 "dhgroup": "ffdhe6144" 00:15:07.450 } 00:15:07.450 } 00:15:07.450 ]' 00:15:07.450 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.450 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:07.450 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.450 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:07.450 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.450 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.450 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.450 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.017 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:15:08.017 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:15:08.592 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.592 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:15:08.592 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.592 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.592 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.592 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.592 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.592 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:08.592 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:08.852 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:15:08.852 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.852 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:15:08.852 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:08.852 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:08.852 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.852 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.852 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.852 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.852 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.852 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.852 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.852 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.420 00:15:09.420 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.420 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.420 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.680 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.680 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.680 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.680 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.680 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.680 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.680 { 00:15:09.680 "cntlid": 137, 00:15:09.680 "qid": 0, 00:15:09.680 "state": "enabled", 00:15:09.680 "thread": "nvmf_tgt_poll_group_000", 00:15:09.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:15:09.680 "listen_address": { 00:15:09.680 "trtype": "TCP", 00:15:09.680 "adrfam": "IPv4", 00:15:09.680 "traddr": "10.0.0.3", 00:15:09.680 "trsvcid": "4420" 00:15:09.680 }, 00:15:09.680 "peer_address": { 00:15:09.680 "trtype": "TCP", 00:15:09.680 "adrfam": "IPv4", 00:15:09.680 "traddr": "10.0.0.1", 
00:15:09.680 "trsvcid": "38306" 00:15:09.680 }, 00:15:09.680 "auth": { 00:15:09.680 "state": "completed", 00:15:09.680 "digest": "sha512", 00:15:09.680 "dhgroup": "ffdhe8192" 00:15:09.680 } 00:15:09.680 } 00:15:09.680 ]' 00:15:09.680 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.680 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:09.680 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.939 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:09.939 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.939 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.939 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.939 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.198 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:15:10.198 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:15:10.766 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.766 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:15:10.766 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.766 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.766 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.766 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.766 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:10.766 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:11.025 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 
1 00:15:11.025 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.025 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:11.025 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:11.025 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:11.025 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.025 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.025 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.025 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.025 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.025 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.026 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.026 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.594 00:15:11.594 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.594 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.594 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.161 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.161 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.161 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.161 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.162 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.162 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.162 { 00:15:12.162 "cntlid": 139, 00:15:12.162 "qid": 0, 00:15:12.162 "state": "enabled", 00:15:12.162 "thread": "nvmf_tgt_poll_group_000", 00:15:12.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:15:12.162 "listen_address": { 00:15:12.162 "trtype": "TCP", 00:15:12.162 "adrfam": "IPv4", 00:15:12.162 
"traddr": "10.0.0.3", 00:15:12.162 "trsvcid": "4420" 00:15:12.162 }, 00:15:12.162 "peer_address": { 00:15:12.162 "trtype": "TCP", 00:15:12.162 "adrfam": "IPv4", 00:15:12.162 "traddr": "10.0.0.1", 00:15:12.162 "trsvcid": "38324" 00:15:12.162 }, 00:15:12.162 "auth": { 00:15:12.162 "state": "completed", 00:15:12.162 "digest": "sha512", 00:15:12.162 "dhgroup": "ffdhe8192" 00:15:12.162 } 00:15:12.162 } 00:15:12.162 ]' 00:15:12.162 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.162 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:12.162 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.162 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:12.162 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.162 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.162 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.162 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.420 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:15:12.421 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: --dhchap-ctrl-secret DHHC-1:02:OTM2NWYxOGFhOGM1YWU5MThlMGE2MGY5YWFiMzZjNDdlYjYwZDVlYWFjMzZiMjhiETNs8g==: 00:15:13.355 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.355 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:15:13.355 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.355 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.355 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.356 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.356 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:13.356 23:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:13.614 23:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:15:13.614 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.614 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:13.614 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:13.614 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:13.614 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.615 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.615 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.615 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.615 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.615 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.615 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.615 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.182 00:15:14.182 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.182 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.182 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.441 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.441 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.441 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.442 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.442 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.442 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.442 { 00:15:14.442 "cntlid": 141, 00:15:14.442 "qid": 0, 00:15:14.442 "state": "enabled", 00:15:14.442 "thread": "nvmf_tgt_poll_group_000", 00:15:14.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 
00:15:14.442 "listen_address": { 00:15:14.442 "trtype": "TCP", 00:15:14.442 "adrfam": "IPv4", 00:15:14.442 "traddr": "10.0.0.3", 00:15:14.442 "trsvcid": "4420" 00:15:14.442 }, 00:15:14.442 "peer_address": { 00:15:14.442 "trtype": "TCP", 00:15:14.442 "adrfam": "IPv4", 00:15:14.442 "traddr": "10.0.0.1", 00:15:14.442 "trsvcid": "34108" 00:15:14.442 }, 00:15:14.442 "auth": { 00:15:14.442 "state": "completed", 00:15:14.442 "digest": "sha512", 00:15:14.442 "dhgroup": "ffdhe8192" 00:15:14.442 } 00:15:14.442 } 00:15:14.442 ]' 00:15:14.442 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.442 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:14.442 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.442 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:14.442 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.701 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.701 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.701 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.960 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:15:14.960 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:01:YTJjYjQ5ZDJmMTYwNWRhNTIyYzc1ODJhYTYzYmQ0ZWHxbK6/: 00:15:15.528 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.528 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:15:15.528 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.528 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.528 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.528 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.528 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:15.528 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:15.788 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:15:15.788 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.788 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:15.788 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:15.788 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:15.788 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.788 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3 00:15:15.788 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.788 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.788 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.788 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:15.788 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:15.788 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:16.355 00:15:16.355 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.355 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.355 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.615 23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.615 23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.615 23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.615 23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.615 23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.615 23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.615 { 00:15:16.615 "cntlid": 143, 00:15:16.615 "qid": 0, 00:15:16.615 "state": "enabled", 00:15:16.615 "thread": "nvmf_tgt_poll_group_000", 00:15:16.615 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:15:16.615 "listen_address": { 00:15:16.615 "trtype": "TCP", 00:15:16.615 "adrfam": "IPv4", 00:15:16.615 "traddr": "10.0.0.3", 00:15:16.615 "trsvcid": "4420" 00:15:16.615 }, 00:15:16.615 "peer_address": { 00:15:16.615 "trtype": "TCP", 00:15:16.615 "adrfam": "IPv4", 00:15:16.615 "traddr": "10.0.0.1", 00:15:16.615 "trsvcid": "34150" 00:15:16.615 }, 00:15:16.615 "auth": { 00:15:16.615 "state": "completed", 00:15:16.615 "digest": "sha512", 00:15:16.615 "dhgroup": "ffdhe8192" 00:15:16.615 } 00:15:16.615 } 00:15:16.615 ]' 00:15:16.615 23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.615 23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:16.615 23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.874 23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:16.874 23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.874 23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.874 23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.874 23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.132 23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:15:17.132 23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:15:17.700 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.700 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:15:17.700 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.700 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.700 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.700 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:17.700 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:17.700 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:17.700 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:17.700 
23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:17.700 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:17.966 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:17.966 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.966 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:17.966 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:17.966 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:17.966 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.967 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.967 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.967 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.967 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.967 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.967 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.967 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.534 00:15:18.534 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.534 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.534 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.793 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.793 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.793 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.793 23:59:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.793 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.793 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.793 { 00:15:18.793 "cntlid": 145, 00:15:18.793 "qid": 0, 00:15:18.793 "state": "enabled", 00:15:18.793 "thread": "nvmf_tgt_poll_group_000", 00:15:18.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:15:18.793 "listen_address": { 00:15:18.793 "trtype": "TCP", 00:15:18.793 "adrfam": "IPv4", 00:15:18.793 "traddr": "10.0.0.3", 00:15:18.793 "trsvcid": "4420" 00:15:18.793 }, 00:15:18.793 "peer_address": { 00:15:18.793 "trtype": "TCP", 00:15:18.793 "adrfam": "IPv4", 00:15:18.793 "traddr": "10.0.0.1", 00:15:18.793 "trsvcid": "34172" 00:15:18.793 }, 00:15:18.793 "auth": { 00:15:18.793 "state": "completed", 00:15:18.793 "digest": "sha512", 00:15:18.793 "dhgroup": "ffdhe8192" 00:15:18.793 } 00:15:18.793 } 00:15:18.793 ]' 00:15:18.793 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.052 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:19.052 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.052 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:19.052 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.052 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.052 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.052 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.311 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:15:19.311 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:00:YjhhY2Y4YjY1YTg2M2Y0OGZlZWEyNjdhOTI0ODkzNjA2MmI5NTE0NjUzMmMzNzU5krlUGw==: --dhchap-ctrl-secret DHHC-1:03:NDk1YzM1YTc0OGM3NDJmZjYxY2M0MzI5YWRmODg3NGZlOTFkZDViZWNiNmFjYzE4YmQ1MjRlMjkyOGM1NTNlYvZPfj4=: 00:15:19.880 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.880 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:15:19.880 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.880 23:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.880 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.880 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 00:15:19.880 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.880 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.880 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.880 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:19.880 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:19.880 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:15:19.880 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:19.880 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.880 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:19.880 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.880 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:19.880 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:19.880 23:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:20.816 request: 00:15:20.816 { 00:15:20.816 "name": "nvme0", 00:15:20.816 "trtype": "tcp", 00:15:20.816 "traddr": "10.0.0.3", 00:15:20.816 "adrfam": "ipv4", 00:15:20.816 "trsvcid": "4420", 00:15:20.816 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:20.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:15:20.817 "prchk_reftag": false, 00:15:20.817 "prchk_guard": false, 00:15:20.817 "hdgst": false, 00:15:20.817 "ddgst": false, 00:15:20.817 "dhchap_key": "key2", 00:15:20.817 "allow_unrecognized_csi": false, 00:15:20.817 "method": "bdev_nvme_attach_controller", 00:15:20.817 "req_id": 1 00:15:20.817 } 00:15:20.817 Got JSON-RPC error response 00:15:20.817 response: 00:15:20.817 { 00:15:20.817 "code": -5, 00:15:20.817 "message": "Input/output error" 00:15:20.817 } 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 
00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:20.817 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:21.385 request: 00:15:21.385 { 00:15:21.385 "name": "nvme0", 00:15:21.385 "trtype": "tcp", 00:15:21.385 "traddr": "10.0.0.3", 00:15:21.385 "adrfam": "ipv4", 00:15:21.385 "trsvcid": "4420", 00:15:21.385 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:21.385 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:15:21.385 "prchk_reftag": false, 00:15:21.385 "prchk_guard": false, 00:15:21.385 "hdgst": false, 00:15:21.385 "ddgst": false, 00:15:21.385 "dhchap_key": "key1", 00:15:21.385 "dhchap_ctrlr_key": "ckey2", 00:15:21.385 "allow_unrecognized_csi": false, 00:15:21.385 "method": "bdev_nvme_attach_controller", 00:15:21.385 "req_id": 1 00:15:21.385 } 00:15:21.385 Got JSON-RPC error response 00:15:21.385 response: 00:15:21.385 { 00:15:21.385 "code": -5, 00:15:21.385 "message": "Input/output error" 00:15:21.385 } 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.385 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.952 request: 00:15:21.952 { 00:15:21.952 "name": "nvme0", 00:15:21.952 "trtype": "tcp", 00:15:21.952 "traddr": "10.0.0.3", 00:15:21.952 "adrfam": "ipv4", 00:15:21.952 "trsvcid": "4420", 00:15:21.952 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:21.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:15:21.952 "prchk_reftag": false, 00:15:21.952 "prchk_guard": false, 00:15:21.952 "hdgst": false, 00:15:21.952 "ddgst": false, 00:15:21.952 "dhchap_key": "key1", 00:15:21.952 "dhchap_ctrlr_key": "ckey1", 00:15:21.952 "allow_unrecognized_csi": false, 00:15:21.952 "method": "bdev_nvme_attach_controller", 00:15:21.952 "req_id": 1 00:15:21.952 } 00:15:21.952 Got JSON-RPC error response 00:15:21.952 response: 00:15:21.952 { 00:15:21.952 "code": -5, 00:15:21.953 "message": "Input/output error" 00:15:21.953 } 00:15:21.953 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:21.953 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:21.953 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:21.953 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:21.953 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:15:21.953 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.953 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.953 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.953 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 69771 00:15:21.953 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 69771 ']' 00:15:21.953 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 69771 00:15:21.953 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:21.953 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.953 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69771 00:15:21.953 killing process with pid 69771 00:15:21.953 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:21.953 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:21.953 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 69771' 00:15:21.953 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 69771 00:15:21.953 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 69771 00:15:22.890 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:22.890 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:22.890 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:22.890 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.890 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=72787 00:15:22.890 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 72787 00:15:22.890 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:22.890 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 72787 ']' 00:15:22.890 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.890 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.890 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.890 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.890 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.827 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.827 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:23.827 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:23.827 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:23.827 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.827 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
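[editor's note, not part of the captured log] At this point the target has been restarted with --wait-for-rpc and nvmf_auth debug logging enabled (nvmfpid=72787), and the trace that follows re-registers every DH-CHAP key file with the target's keyring before any host entry is configured. A minimal sketch of that setup step, assuming rpc.py stands for scripts/rpc.py talking to the target's default /var/tmp/spdk.sock socket and using the same /tmp key files visible in the trace:

    rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.SN3
    rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sYk
    rpc.py keyring_file_add_key key1  /tmp/spdk.key-sha256.kGi
    # key2/ckey2 and key3 follow the same pattern; key3 has no ctrlr key
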
00:15:23.827 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:23.827 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 72787 00:15:23.827 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 72787 ']' 00:15:23.827 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.827 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.827 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.827 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.827 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.084 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.084 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:24.084 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:24.084 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.084 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.342 null0 00:15:24.342 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.342 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:24.342 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.SN3 00:15:24.342 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.342 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.342 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.342 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.sYk ]] 00:15:24.342 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sYk 00:15:24.342 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.342 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kGi 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.601 23:59:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.dhO ]] 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.dhO 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.t92 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.s9h ]] 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.s9h 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.xLH 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.601 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:24.602 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:24.602 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.538 nvme0n1 00:15:25.538 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.538 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.538 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.797 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.797 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.797 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.797 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.797 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.797 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.797 { 00:15:25.797 "cntlid": 1, 00:15:25.797 "qid": 0, 00:15:25.797 "state": "enabled", 00:15:25.797 "thread": "nvmf_tgt_poll_group_000", 00:15:25.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:15:25.797 "listen_address": { 00:15:25.797 "trtype": "TCP", 00:15:25.797 "adrfam": "IPv4", 00:15:25.797 "traddr": "10.0.0.3", 00:15:25.797 "trsvcid": "4420" 00:15:25.797 }, 00:15:25.797 "peer_address": { 00:15:25.797 "trtype": "TCP", 00:15:25.797 "adrfam": "IPv4", 00:15:25.797 "traddr": "10.0.0.1", 00:15:25.797 "trsvcid": "55176" 00:15:25.797 }, 00:15:25.797 "auth": { 00:15:25.797 "state": "completed", 00:15:25.797 "digest": "sha512", 00:15:25.797 "dhgroup": "ffdhe8192" 00:15:25.797 } 00:15:25.797 } 00:15:25.797 ]' 00:15:25.797 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.797 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:25.797 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.797 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:25.797 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.056 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.056 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.056 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.315 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:15:26.315 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:15:26.883 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.883 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:15:26.883 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.883 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.883 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.883 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key3 00:15:26.883 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.883 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.883 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.883 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:26.883 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:27.142 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:27.142 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:27.142 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:27.142 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:27.142 23:59:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.142 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:27.142 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.142 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:27.142 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.142 23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.710 request: 00:15:27.710 { 00:15:27.710 "name": "nvme0", 00:15:27.710 "trtype": "tcp", 00:15:27.710 "traddr": "10.0.0.3", 00:15:27.710 "adrfam": "ipv4", 00:15:27.710 "trsvcid": "4420", 00:15:27.710 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:27.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:15:27.710 "prchk_reftag": false, 00:15:27.710 "prchk_guard": false, 00:15:27.710 "hdgst": false, 00:15:27.710 "ddgst": false, 00:15:27.710 "dhchap_key": "key3", 00:15:27.710 "allow_unrecognized_csi": false, 00:15:27.710 "method": "bdev_nvme_attach_controller", 00:15:27.710 "req_id": 1 00:15:27.710 } 00:15:27.710 Got JSON-RPC error response 00:15:27.710 response: 00:15:27.710 { 00:15:27.710 "code": -5, 00:15:27.710 "message": "Input/output error" 00:15:27.710 } 00:15:27.710 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:27.710 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:27.710 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:27.710 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:27.710 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:27.710 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:27.710 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:27.710 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:27.710 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:27.710 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:27.710 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:27.710 23:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:27.710 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.710 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:27.710 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.710 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:27.710 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.711 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.969 request: 00:15:27.969 { 00:15:27.969 "name": "nvme0", 00:15:27.969 "trtype": "tcp", 00:15:27.969 "traddr": "10.0.0.3", 00:15:27.969 "adrfam": "ipv4", 00:15:27.969 "trsvcid": "4420", 00:15:27.969 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:27.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:15:27.969 "prchk_reftag": false, 00:15:27.969 "prchk_guard": false, 00:15:27.969 "hdgst": false, 00:15:27.969 "ddgst": false, 00:15:27.969 "dhchap_key": "key3", 00:15:27.969 "allow_unrecognized_csi": false, 00:15:27.969 "method": "bdev_nvme_attach_controller", 00:15:27.969 "req_id": 1 00:15:27.969 } 00:15:27.969 Got JSON-RPC error response 00:15:27.969 response: 00:15:27.969 { 00:15:27.969 "code": -5, 00:15:27.969 "message": "Input/output error" 00:15:27.969 } 00:15:27.969 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:27.969 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:27.969 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:27.969 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:27.969 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:27.969 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:28.228 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:28.228 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:28.228 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:28.228 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:28.228 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:15:28.228 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.228 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.228 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.228 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:15:28.228 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.228 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.228 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.228 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:28.228 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:28.228 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:28.487 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:28.487 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.487 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:28.487 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.487 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:28.487 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:28.487 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:28.746 request: 00:15:28.746 { 00:15:28.746 "name": "nvme0", 00:15:28.746 "trtype": "tcp", 00:15:28.746 "traddr": "10.0.0.3", 00:15:28.746 "adrfam": "ipv4", 00:15:28.746 "trsvcid": "4420", 00:15:28.746 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:28.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:15:28.746 "prchk_reftag": false, 00:15:28.746 "prchk_guard": false, 00:15:28.746 "hdgst": false, 00:15:28.746 "ddgst": false, 00:15:28.746 "dhchap_key": "key0", 00:15:28.746 
"dhchap_ctrlr_key": "key1", 00:15:28.746 "allow_unrecognized_csi": false, 00:15:28.746 "method": "bdev_nvme_attach_controller", 00:15:28.746 "req_id": 1 00:15:28.746 } 00:15:28.746 Got JSON-RPC error response 00:15:28.746 response: 00:15:28.746 { 00:15:28.746 "code": -5, 00:15:28.746 "message": "Input/output error" 00:15:28.746 } 00:15:28.746 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:28.746 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:28.746 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:28.746 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:28.746 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:28.746 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:28.747 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:29.315 nvme0n1 00:15:29.315 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:29.315 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:29.315 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.315 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.315 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.315 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.883 23:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 00:15:29.883 23:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.883 23:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.883 23:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.883 23:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:29.883 23:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:29.883 23:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:30.859 nvme0n1 00:15:30.859 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:30.859 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:30.859 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.859 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.859 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:30.859 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.859 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.859 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.859 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:30.859 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:30.859 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.428 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.428 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:15:31.428 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid 2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -l 0 --dhchap-secret DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: --dhchap-ctrl-secret DHHC-1:03:NjUwYzA1MTFjNzhlYTc4NjY1MTA3YzBkMmQzNDhmOThkODM2Yzc2Njk2NzQyNGI2YzMxZGE0MjU3OGQxNTNiMQdxwhM=: 00:15:31.996 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:15:31.996 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:31.996 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:31.996 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:31.996 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:31.996 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 
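[editor's note, not part of the captured log] Steps @225/@226 above validate the kernel initiator path: after the subsystem is rotated to key2/key3 with nvmf_subsystem_set_keys, the host connects with nvme-cli, passing the DHHC-1 blobs directly rather than keyring names. A sketch of the same invocation with the long secrets elided:

    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:02:...' \
        --dhchap-ctrl-secret 'DHHC-1:03:...'

The controller node is then located under /sys/devices/virtual/nvme-fabrics/ctl/ by matching the subsystem NQN (the attribute being read is not shown in the xtrace), which is what the loop ending in "break" above does.
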
00:15:31.996 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:31.997 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.997 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.256 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:32.256 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:32.256 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:32.256 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:32.256 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:32.256 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:32.256 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:32.256 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:32.256 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:32.256 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:32.823 request: 00:15:32.823 { 00:15:32.823 "name": "nvme0", 00:15:32.823 "trtype": "tcp", 00:15:32.823 "traddr": "10.0.0.3", 00:15:32.823 "adrfam": "ipv4", 00:15:32.823 "trsvcid": "4420", 00:15:32.823 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:32.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a", 00:15:32.823 "prchk_reftag": false, 00:15:32.823 "prchk_guard": false, 00:15:32.823 "hdgst": false, 00:15:32.823 "ddgst": false, 00:15:32.823 "dhchap_key": "key1", 00:15:32.823 "allow_unrecognized_csi": false, 00:15:32.823 "method": "bdev_nvme_attach_controller", 00:15:32.823 "req_id": 1 00:15:32.823 } 00:15:32.823 Got JSON-RPC error response 00:15:32.823 response: 00:15:32.823 { 00:15:32.823 "code": -5, 00:15:32.823 "message": "Input/output error" 00:15:32.823 } 00:15:32.823 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:32.823 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:32.823 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:32.823 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:32.823 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 
--dhchap-ctrlr-key key3 00:15:32.823 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:32.823 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:33.759 nvme0n1 00:15:33.759 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:33.759 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:33.759 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.018 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.019 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.019 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.278 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:15:34.278 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.278 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.278 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.278 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:34.278 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:34.278 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:34.537 nvme0n1 00:15:34.796 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:34.796 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:34.796 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.055 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.055 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:35.055 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.314 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:35.314 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.314 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.314 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.315 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: '' 2s 00:15:35.315 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:35.315 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:35.315 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: 00:15:35.315 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:35.315 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:35.315 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:35.315 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: ]] 00:15:35.315 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NzJkZDZmMDQ2ZTEzZGMyNmE5MzFkNDM3ZDNlZmUwYWUQ1o38: 00:15:35.315 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:35.315 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:35.315 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: 2s 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:37.222 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: ]] 00:15:37.223 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YjViMjFjYTk4NGFlNDUyODJkOGQwZjgzNTE3YzFhZjIzZTIyNDA5MDYwOGNjZGJmVA9WuA==: 00:15:37.223 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:37.223 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:39.757 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:39.757 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:39.757 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:39.757 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:39.757 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:39.757 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:39.757 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:39.757 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.757 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:39.757 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.757 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.757 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
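[editor's note, not part of the captured log] Steps @240-@245 above rotate the DH-CHAP secrets on the live kernel controller: the new DHHC-1 blob is echoed into the controller's nvme-fabrics sysfs node (host key at @240, controller key at @244), then the test sleeps 2s and re-checks with waitforblk that nvme0n1 is still present, i.e. that re-authentication succeeded. The xtrace does not print the redirection target; a sketch assuming the kernel's dhchap_secret/dhchap_ctrl_secret controller attributes:

    ctl=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
    echo 'DHHC-1:01:NzJk...' > "$ctl/dhchap_secret"       # assumed attribute name
    echo 'DHHC-1:02:YjVi...' > "$ctl/dhchap_ctrl_secret"  # assumed attribute name
    sleep 2s
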
00:15:39.757 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:39.757 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:39.757 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:40.326 nvme0n1 00:15:40.326 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:40.326 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.326 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.326 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.326 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:40.326 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:41.262 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:41.262 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:15:41.262 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.520 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.520 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:15:41.520 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.520 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.520 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.520 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:41.520 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:41.780 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 
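[editor's note, not part of the captured log] Steps @252-@257 above exercise runtime re-authentication on the SPDK host side rather than the kernel: the subsystem is moved to key2/key3 and the established bdev controller is re-keyed in place with bdev_nvme_set_keys, referencing keyring names instead of raw DHHC-1 blobs. A sketch of the host-side call:

    rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3

The follow-up calls at @256/@257 issue the same pair of RPCs with no key arguments, clearing the keys on both the subsystem and the controller.
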
00:15:41.780 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:41.780 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.038 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.038 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:42.038 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.038 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.038 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.038 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:42.038 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:42.038 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:42.038 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:42.038 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.038 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:42.038 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:42.038 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:42.038 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:42.977 request: 00:15:42.977 { 00:15:42.977 "name": "nvme0", 00:15:42.977 "dhchap_key": "key1", 00:15:42.977 "dhchap_ctrlr_key": "key3", 00:15:42.977 "method": "bdev_nvme_set_keys", 00:15:42.977 "req_id": 1 00:15:42.977 } 00:15:42.977 Got JSON-RPC error response 00:15:42.977 response: 00:15:42.977 { 00:15:42.977 "code": -13, 00:15:42.977 "message": "Permission denied" 00:15:42.977 } 00:15:42.977 23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:42.977 23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:42.977 23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:42.977 23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:42.977 23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:42.977 23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:42.977 23:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.977 23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:42.977 23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:44.413 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:44.413 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:44.413 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.413 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:44.413 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:44.413 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.413 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.413 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.413 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:44.413 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:44.413 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:45.792 nvme0n1 00:15:45.792 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:45.792 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.792 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.792 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.792 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:45.792 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:45.792 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc 
bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:45.792 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:45.792 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.792 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:45.792 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:45.792 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:45.792 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:46.361 request: 00:15:46.361 { 00:15:46.361 "name": "nvme0", 00:15:46.361 "dhchap_key": "key2", 00:15:46.361 "dhchap_ctrlr_key": "key0", 00:15:46.361 "method": "bdev_nvme_set_keys", 00:15:46.361 "req_id": 1 00:15:46.361 } 00:15:46.361 Got JSON-RPC error response 00:15:46.361 response: 00:15:46.361 { 00:15:46.361 "code": -13, 00:15:46.361 "message": "Permission denied" 00:15:46.361 } 00:15:46.361 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:46.361 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:46.361 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:46.361 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:46.361 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:46.361 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:46.361 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.621 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:46.621 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:47.558 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:47.558 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:47.558 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.818 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:47.818 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:47.818 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:47.818 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69803 00:15:47.818 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 69803 ']' 00:15:47.818 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 69803 
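
The exchange traced above is the negative half of the DH-CHAP re-key check: target/auth.sh rotates the keys the subsystem accepts via nvmf_subsystem_set_keys, asserts that a host-side bdev_nvme_set_keys using a key the target no longer holds is rejected with -13 (Permission denied), then polls bdev_nvme_get_controllers until the stale controller count drops to zero. A minimal stand-alone reproduction of that sequence, reusing the socket path, NQNs and key names from this trace (a sketch, not part of the harness):

    # Target side: rotate the keys accepted for this host to key2/key3.
    scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # Host side: re-keying with key1, which the target no longer accepts, must fail.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key key3 || echo "rejected as expected (-13)"
    # The controller then fails to reconnect and eventually disappears.
    until [ "$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length)" -eq 0 ]; do
        sleep 1
    done
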
00:15:47.818 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:47.818 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.818 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69803 00:15:47.818 killing process with pid 69803 00:15:47.818 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:47.818 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:47.818 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69803' 00:15:47.818 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 69803 00:15:47.818 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 69803 00:15:49.724 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:49.724 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:49.724 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:49.724 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:49.724 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:49.724 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:49.724 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:49.724 rmmod nvme_tcp 00:15:49.724 rmmod nvme_fabrics 00:15:49.983 rmmod nvme_keyring 00:15:49.983 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:49.983 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:15:49.983 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:49.983 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 72787 ']' 00:15:49.983 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 72787 00:15:49.983 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 72787 ']' 00:15:49.983 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 72787 00:15:49.983 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:49.983 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.983 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72787 00:15:49.983 killing process with pid 72787 00:15:49.983 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:49.983 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:49.983 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72787' 00:15:49.983 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@973 -- # kill 72787 00:15:49.983 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 72787 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:50.928 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f 
/tmp/spdk.key-null.SN3 /tmp/spdk.key-sha256.kGi /tmp/spdk.key-sha384.t92 /tmp/spdk.key-sha512.xLH /tmp/spdk.key-sha512.sYk /tmp/spdk.key-sha384.dhO /tmp/spdk.key-sha256.s9h '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:51.188 00:15:51.188 real 3m11.208s 00:15:51.188 user 7m35.498s 00:15:51.188 sys 0m27.364s 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.188 ************************************ 00:15:51.188 END TEST nvmf_auth_target 00:15:51.188 ************************************ 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:51.188 ************************************ 00:15:51.188 START TEST nvmf_bdevio_no_huge 00:15:51.188 ************************************ 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:51.188 * Looking for test storage... 00:15:51.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
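
The scripts/common.sh xtrace that follows is the lcov version gate: lt 1.15 2 tokenizes both version strings on '.', '-' and ':' and compares them component by component. Condensed into a stand-alone sketch (helper names match the trace; the body is simplified and skips the decimal sanitizing the real script performs):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-: op=$2 v   # split version strings on '.', '-' and ':'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == *=* ]]        # equal versions only satisfy ==, <= or >=
    }
    lt 1.15 2 && echo "lcov older than 2.x: use legacy LCOV_OPTS"
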
00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:15:51.188 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.448 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:51.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.449 --rc genhtml_branch_coverage=1 00:15:51.449 --rc genhtml_function_coverage=1 00:15:51.449 --rc genhtml_legend=1 00:15:51.449 --rc geninfo_all_blocks=1 00:15:51.449 --rc geninfo_unexecuted_blocks=1 00:15:51.449 00:15:51.449 ' 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:51.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.449 --rc genhtml_branch_coverage=1 00:15:51.449 --rc genhtml_function_coverage=1 00:15:51.449 --rc genhtml_legend=1 00:15:51.449 --rc geninfo_all_blocks=1 00:15:51.449 --rc geninfo_unexecuted_blocks=1 00:15:51.449 00:15:51.449 ' 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:51.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.449 --rc genhtml_branch_coverage=1 00:15:51.449 --rc genhtml_function_coverage=1 00:15:51.449 --rc genhtml_legend=1 00:15:51.449 --rc geninfo_all_blocks=1 00:15:51.449 --rc geninfo_unexecuted_blocks=1 00:15:51.449 00:15:51.449 ' 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:51.449 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.449 --rc genhtml_branch_coverage=1 00:15:51.449 --rc genhtml_function_coverage=1 00:15:51.449 --rc genhtml_legend=1 00:15:51.449 --rc geninfo_all_blocks=1 00:15:51.449 --rc geninfo_unexecuted_blocks=1 00:15:51.449 00:15:51.449 ' 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:51.449 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:51.449 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:51.450 23:59:57 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:51.450 Cannot find device "nvmf_init_br" 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:51.450 Cannot find device "nvmf_init_br2" 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:51.450 Cannot find device "nvmf_tgt_br" 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:51.450 Cannot find device "nvmf_tgt_br2" 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:51.450 Cannot find device "nvmf_init_br" 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:51.450 Cannot find device "nvmf_init_br2" 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:15:51.450 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:51.450 Cannot find device "nvmf_tgt_br" 00:15:51.450 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:15:51.450 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:51.450 Cannot find device "nvmf_tgt_br2" 00:15:51.450 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:15:51.450 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:51.450 Cannot find device "nvmf_br" 00:15:51.450 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:15:51.450 23:59:58 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:51.450 Cannot find device "nvmf_init_if" 00:15:51.450 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:15:51.450 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:51.450 Cannot find device "nvmf_init_if2" 00:15:51.450 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:15:51.450 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.450 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.450 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:15:51.450 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.450 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.450 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:15:51.450 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:51.450 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:51.450 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:51.450 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:51.450 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:51.450 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:51.450 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:51.709 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:51.709 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:51.709 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:51.709 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:51.709 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:51.709 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:51.709 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:51.709 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:51.709 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:51.709 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:51.709 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:51.709 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:51.710 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:51.710 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:15:51.710 00:15:51.710 --- 10.0.0.3 ping statistics --- 00:15:51.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.710 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:51.710 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:15:51.710 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:15:51.710 00:15:51.710 --- 10.0.0.4 ping statistics --- 00:15:51.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.710 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:51.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:51.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:15:51.710 00:15:51.710 --- 10.0.0.1 ping statistics --- 00:15:51.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.710 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:51.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:51.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:15:51.710 00:15:51.710 --- 10.0.0.2 ping statistics --- 00:15:51.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.710 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=73463 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 73463 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 73463 ']' 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:15:51.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:51.710 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:51.970 [2024-11-18 23:59:58.494095] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:15:51.970 [2024-11-18 23:59:58.494267] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:52.230 [2024-11-18 23:59:58.716567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:52.230 [2024-11-18 23:59:58.893057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.230 [2024-11-18 23:59:58.893152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.230 [2024-11-18 23:59:58.893185] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.230 [2024-11-18 23:59:58.893202] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.230 [2024-11-18 23:59:58.893215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
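
The nvmf_veth_init block above builds the usual two-sided topology before the target comes up: two veth pairs whose target ends (10.0.0.3, 10.0.0.4) live in the nvmf_tgt_ns_spdk namespace, initiator ends (10.0.0.1, 10.0.0.2) stay in the root namespace, all four legs enslaved to the nvmf_br bridge, iptables ACCEPT rules opened for TCP port 4420, and the four pings confirming reachability. The target is then launched inside the namespace with hugepages disabled; reduced to its essentials (paths as in this run; the polling loop is a sketch of what waitforlisten does, not its actual body):

    # Start nvmf_tgt in the test namespace; --no-huge -s 1024 runs on 1024 MB of
    # ordinary memory, -m 0x78 pins reactors to cores 3-6 (matching the
    # "Reactor started" lines below).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    # Block until the app answers on its default RPC socket (/var/tmp/spdk.sock).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
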
00:15:52.230 [2024-11-18 23:59:58.895231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:52.230 [2024-11-18 23:59:58.895390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:52.230 [2024-11-18 23:59:58.895512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:52.230 [2024-11-18 23:59:58.896069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:52.490 [2024-11-18 23:59:59.069215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:53.060 [2024-11-18 23:59:59.546807] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:53.060 Malloc0 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.060 23:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:53.060 [2024-11-18 23:59:59.646627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:53.060 { 00:15:53.060 "params": { 00:15:53.060 "name": "Nvme$subsystem", 00:15:53.060 "trtype": "$TEST_TRANSPORT", 00:15:53.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:53.060 "adrfam": "ipv4", 00:15:53.060 "trsvcid": "$NVMF_PORT", 00:15:53.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:53.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:53.060 "hdgst": ${hdgst:-false}, 00:15:53.060 "ddgst": ${ddgst:-false} 00:15:53.060 }, 00:15:53.060 "method": "bdev_nvme_attach_controller" 00:15:53.060 } 00:15:53.060 EOF 00:15:53.060 )") 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:15:53.060 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:53.060 "params": { 00:15:53.060 "name": "Nvme1", 00:15:53.060 "trtype": "tcp", 00:15:53.060 "traddr": "10.0.0.3", 00:15:53.060 "adrfam": "ipv4", 00:15:53.060 "trsvcid": "4420", 00:15:53.060 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:53.060 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:53.060 "hdgst": false, 00:15:53.060 "ddgst": false 00:15:53.060 }, 00:15:53.060 "method": "bdev_nvme_attach_controller" 00:15:53.060 }' 00:15:53.319 [2024-11-18 23:59:59.758925] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
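
The heredoc expansion printed above is the single-controller JSON that gen_nvmf_target_json hands to bdevio on /dev/fd/62. Modulo the JSON-config wrapping, it describes the same attach that could be issued by hand over RPC against a running app, with the flag spelling used elsewhere in this log:

    # Hand-rolled equivalent of the generated bdev_nvme_attach_controller entry.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
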
00:15:53.319 [2024-11-18 23:59:59.759620] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid73503 ] 00:15:53.319 [2024-11-18 23:59:59.973535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:53.578 [2024-11-19 00:00:00.147180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.578 [2024-11-19 00:00:00.147826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.578 [2024-11-19 00:00:00.147843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.837 [2024-11-19 00:00:00.309158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:54.096 I/O targets: 00:15:54.096 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:54.096 00:15:54.096 00:15:54.096 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.096 http://cunit.sourceforge.net/ 00:15:54.096 00:15:54.096 00:15:54.096 Suite: bdevio tests on: Nvme1n1 00:15:54.096 Test: blockdev write read block ...passed 00:15:54.096 Test: blockdev write zeroes read block ...passed 00:15:54.096 Test: blockdev write zeroes read no split ...passed 00:15:54.096 Test: blockdev write zeroes read split ...passed 00:15:54.096 Test: blockdev write zeroes read split partial ...passed 00:15:54.096 Test: blockdev reset ...[2024-11-19 00:00:00.663270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:54.096 [2024-11-19 00:00:00.663438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:15:54.097 [2024-11-19 00:00:00.683928] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:54.097 passed 00:15:54.097 Test: blockdev write read 8 blocks ...passed 00:15:54.097 Test: blockdev write read size > 128k ...passed 00:15:54.097 Test: blockdev write read invalid size ...passed 00:15:54.097 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:54.097 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:54.097 Test: blockdev write read max offset ...passed 00:15:54.097 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:54.097 Test: blockdev writev readv 8 blocks ...passed 00:15:54.097 Test: blockdev writev readv 30 x 1block ...passed 00:15:54.097 Test: blockdev writev readv block ...passed 00:15:54.097 Test: blockdev writev readv size > 128k ...passed 00:15:54.097 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:54.097 Test: blockdev comparev and writev ...[2024-11-19 00:00:00.696079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.097 [2024-11-19 00:00:00.696258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:54.097 [2024-11-19 00:00:00.696379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.097 [2024-11-19 00:00:00.696484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:54.097 [2024-11-19 00:00:00.696982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.097 [2024-11-19 00:00:00.697119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:54.097 [2024-11-19 00:00:00.697224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.097 [2024-11-19 00:00:00.697326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:54.097 [2024-11-19 00:00:00.697877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.097 [2024-11-19 00:00:00.697986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:54.097 [2024-11-19 00:00:00.698091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.097 [2024-11-19 00:00:00.698187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:54.097 [2024-11-19 00:00:00.698653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.097 [2024-11-19 00:00:00.698763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:54.097 [2024-11-19 00:00:00.698869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:54.097 [2024-11-19 00:00:00.698969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:54.097 passed 00:15:54.097 Test: blockdev nvme passthru rw ...passed 00:15:54.097 Test: blockdev nvme passthru vendor specific ...[2024-11-19 00:00:00.700132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:54.097 [2024-11-19 00:00:00.700274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:54.097 [2024-11-19 00:00:00.700515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:54.097 [2024-11-19 00:00:00.700813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:54.097 [2024-11-19 00:00:00.701097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:54.097 [2024-11-19 00:00:00.701221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:54.097 [2024-11-19 00:00:00.701457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:54.097 [2024-11-19 00:00:00.701578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:54.097 passed 00:15:54.097 Test: blockdev nvme admin passthru ...passed 00:15:54.097 Test: blockdev copy ...passed 00:15:54.097 00:15:54.097 Run Summary: Type Total Ran Passed Failed Inactive 00:15:54.097 suites 1 1 n/a 0 0 00:15:54.097 tests 23 23 23 0 0 00:15:54.097 asserts 152 152 152 0 n/a 00:15:54.097 00:15:54.097 Elapsed time = 0.258 seconds 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:55.031 rmmod nvme_tcp 00:15:55.031 rmmod nvme_fabrics 00:15:55.031 rmmod nvme_keyring 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 73463 ']' 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 73463 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 73463 ']' 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 73463 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73463 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:15:55.031 killing process with pid 73463 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73463' 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 73463 00:15:55.031 00:00:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 73463 00:15:55.968 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:55.968 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:55.968 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:55.968 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:15:55.968 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:15:55.968 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:55.968 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:15:55.968 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:55.968 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:55.968 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:55.968 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:55.968 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:55.968 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:55.968 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:55.968 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:55.968 00:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:55.968 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:55.968 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:55.968 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:55.968 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:56.227 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.227 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.227 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:56.227 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.227 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.227 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.227 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:15:56.227 00:15:56.227 real 0m5.032s 00:15:56.227 user 0m17.112s 00:15:56.227 sys 0m1.629s 00:15:56.227 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.227 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:56.227 ************************************ 00:15:56.227 END TEST nvmf_bdevio_no_huge 00:15:56.227 ************************************ 00:15:56.227 00:00:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:56.227 00:00:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:56.227 00:00:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.227 00:00:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:56.227 ************************************ 00:15:56.227 START TEST nvmf_tls 00:15:56.227 ************************************ 00:15:56.227 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:56.227 * Looking for test storage... 
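[Editor's note] The nvmftestfini teardown traced just before this TLS suite started unwinds the virtual topology in a fixed order: detach ports from the bridge, bring links down, delete the bridge, delete the veth pairs, then remove the namespace via remove_spdk_ns (whose body the log hides behind xtrace_disable_per_cmd). A condensed sketch of that order, assuming remove_spdk_ns boils down to an "ip netns delete"; interface and namespace names are the ones from this run, and the "|| true" guards are added here only so the sketch is safe to re-run:

for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$port" nomaster || true   # detach every port from the bridge first
    ip link set "$port" down || true       # then quiesce the bridge-side veth ends
done
ip link delete nvmf_br type bridge || true # drop the bridge itself
ip link delete nvmf_init_if || true        # deleting one end of a veth pair removes its peer
ip link delete nvmf_init_if2 || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
ip netns delete nvmf_tgt_ns_spdk || true   # assumed body of remove_spdk_ns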
00:15:56.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:56.227 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:56.227 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:15:56.227 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:15:56.487 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:56.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.488 --rc genhtml_branch_coverage=1 00:15:56.488 --rc genhtml_function_coverage=1 00:15:56.488 --rc genhtml_legend=1 00:15:56.488 --rc geninfo_all_blocks=1 00:15:56.488 --rc geninfo_unexecuted_blocks=1 00:15:56.488 00:15:56.488 ' 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:56.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.488 --rc genhtml_branch_coverage=1 00:15:56.488 --rc genhtml_function_coverage=1 00:15:56.488 --rc genhtml_legend=1 00:15:56.488 --rc geninfo_all_blocks=1 00:15:56.488 --rc geninfo_unexecuted_blocks=1 00:15:56.488 00:15:56.488 ' 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:56.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.488 --rc genhtml_branch_coverage=1 00:15:56.488 --rc genhtml_function_coverage=1 00:15:56.488 --rc genhtml_legend=1 00:15:56.488 --rc geninfo_all_blocks=1 00:15:56.488 --rc geninfo_unexecuted_blocks=1 00:15:56.488 00:15:56.488 ' 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:56.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.488 --rc genhtml_branch_coverage=1 00:15:56.488 --rc genhtml_function_coverage=1 00:15:56.488 --rc genhtml_legend=1 00:15:56.488 --rc geninfo_all_blocks=1 00:15:56.488 --rc geninfo_unexecuted_blocks=1 00:15:56.488 00:15:56.488 ' 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.488 00:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:56.488 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:56.488 
00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:56.488 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:56.488 Cannot find device "nvmf_init_br" 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:56.488 Cannot find device "nvmf_init_br2" 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:56.488 Cannot find device "nvmf_tgt_br" 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.488 Cannot find device "nvmf_tgt_br2" 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:56.488 Cannot find device "nvmf_init_br" 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:56.488 Cannot find device "nvmf_init_br2" 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:56.488 Cannot find device "nvmf_tgt_br" 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:56.488 Cannot find device "nvmf_tgt_br2" 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:56.488 Cannot find device "nvmf_br" 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:56.488 Cannot find device "nvmf_init_if" 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:56.488 Cannot find device "nvmf_init_if2" 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:56.488 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:56.747 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:56.747 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:56.747 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:56.747 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:56.747 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:56.747 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:56.747 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:56.748 00:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:56.748 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:56.748 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:15:56.748 00:15:56.748 --- 10.0.0.3 ping statistics --- 00:15:56.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.748 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:56.748 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:56.748 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:15:56.748 00:15:56.748 --- 10.0.0.4 ping statistics --- 00:15:56.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.748 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:56.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:56.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:56.748 00:15:56.748 --- 10.0.0.1 ping statistics --- 00:15:56.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.748 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:56.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:56.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:15:56.748 00:15:56.748 --- 10.0.0.2 ping statistics --- 00:15:56.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.748 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=73788 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 73788 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73788 ']' 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.748 00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:57.007 [2024-11-19 00:00:03.504015] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
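[Editor's note] The four pings above verify the topology that nvmf_veth_init built in the preceding lines: one namespace for the target, a veth pair per direction, everything joined by a bridge. A condensed sketch of one initiator/target pair, using the commands and addresses traced above (the second pair, *_if2/*_br2 on 10.0.0.2 and 10.0.0.4, is wired identically):

ip netns add nvmf_tgt_ns_spdk                              # the target gets its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end inside
ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target address
ip link add nvmf_br type bridge                            # the bridge joins the *_br peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# plus "ip link set ... up" on each interface and the iptables ACCEPT
# rules for TCP port 4420, exactly as traced above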
00:15:57.007 [2024-11-19 00:00:03.504205] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.007 [2024-11-19 00:00:03.688940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.325 [2024-11-19 00:00:03.780498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.325 [2024-11-19 00:00:03.780644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.325 [2024-11-19 00:00:03.780681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.325 [2024-11-19 00:00:03.780704] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.325 [2024-11-19 00:00:03.780718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.325 [2024-11-19 00:00:03.782034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.921 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.921 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:57.921 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:57.921 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:57.921 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:57.921 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.921 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:15:57.921 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:58.180 true 00:15:58.180 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:15:58.180 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:58.439 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:15:58.439 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:15:58.439 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:58.698 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:58.698 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:15:58.957 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:15:58.957 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:15:58.957 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:59.216 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:15:59.216 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:15:59.474 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:15:59.474 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:15:59.474 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:15:59.474 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:00.041 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:16:00.041 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:16:00.041 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:00.041 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:00.041 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:00.608 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:16:00.608 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:16:00.608 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:00.608 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:00.608 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:00.867 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:16:00.867 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:16:00.867 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:00.867 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:00.867 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:00.867 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:00.867 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:16:00.867 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:16:00.867 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:00.867 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:00.867 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:00.867 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:00.867 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:00.867 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:00.867 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:16:00.867 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:16:00.867 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:01.126 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:01.126 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:01.126 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.xTTaKAfWxZ 00:16:01.126 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:16:01.126 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.MtbfundQT0 00:16:01.126 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:01.126 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:01.126 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.xTTaKAfWxZ 00:16:01.126 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.MtbfundQT0 00:16:01.126 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:01.384 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:01.642 [2024-11-19 00:00:08.257779] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:01.911 00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.xTTaKAfWxZ 00:16:01.911 00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.xTTaKAfWxZ 00:16:01.911 00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:02.169 [2024-11-19 00:00:08.652649] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:02.170 00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:02.429 00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:02.686 [2024-11-19 00:00:09.160857] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:02.686 [2024-11-19 00:00:09.161189] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:02.686 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:02.944 malloc0 00:16:02.944 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:03.202 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.xTTaKAfWxZ 00:16:03.460 00:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:03.718 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.xTTaKAfWxZ 00:16:15.925 Initializing NVMe Controllers 00:16:15.925 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:15.925 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:15.925 Initialization complete. Launching workers. 00:16:15.925 ======================================================== 00:16:15.925 Latency(us) 00:16:15.925 Device Information : IOPS MiB/s Average min max 00:16:15.925 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7142.98 27.90 8962.72 1609.84 12142.80 00:16:15.925 ======================================================== 00:16:15.925 Total : 7142.98 27.90 8962.72 1609.84 12142.80 00:16:15.925 00:16:15.925 00:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xTTaKAfWxZ 00:16:15.926 00:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:15.926 00:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:15.926 00:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:15.926 00:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xTTaKAfWxZ 00:16:15.926 00:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:15.926 00:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74033 00:16:15.926 00:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:15.926 00:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:15.926 00:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74033 /var/tmp/bdevperf.sock 00:16:15.926 00:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74033 ']' 00:16:15.926 00:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:15.926 00:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:15.926 00:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
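[Editor's note] On the two interchange PSKs generated in tests 119/120 above: the python heredoc that format_key runs is hidden in the trace, but judging from its inputs (prefix, key, digest) and the printed result, the transformation appears to be: treat the hex key as an ASCII string, append its little-endian CRC-32, base64-encode, and wrap the result as NVMeTLSkey-1:01:<base64>:. A sketch under that assumption, which should reproduce the first key printed above if the reading is right:

key=00112233445566778899aabbccddeeff   # PSK from test 119 above
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                     # the ASCII string itself, not decoded hex
crc = zlib.crc32(key).to_bytes(4, "little")    # CRC-32 trailer guards against mistyped keys
# the "01" field here mirrors the digest argument "1" passed to format_interchange_psk
print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")
EOF
# expected: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: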
00:16:15.926 00:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.926 00:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:15.926 [2024-11-19 00:00:20.680772] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:16:15.926 [2024-11-19 00:00:20.680937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74033 ] 00:16:15.926 [2024-11-19 00:00:20.859399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.926 [2024-11-19 00:00:20.957126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.926 [2024-11-19 00:00:21.120300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:15.926 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:15.926 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:15.926 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xTTaKAfWxZ 00:16:15.926 00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:15.926 [2024-11-19 00:00:22.029145] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:15.926 TLSTESTn1 00:16:15.926 00:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:15.926 Running I/O for 10 seconds... 
00:16:17.799 2945.00 IOPS, 11.50 MiB/s [2024-11-19T00:00:25.510Z] 2864.00 IOPS, 11.19 MiB/s [2024-11-19T00:00:26.446Z] 2816.00 IOPS, 11.00 MiB/s [2024-11-19T00:00:27.382Z] 2785.25 IOPS, 10.88 MiB/s [2024-11-19T00:00:28.320Z] 2774.80 IOPS, 10.84 MiB/s [2024-11-19T00:00:29.259Z] 2768.00 IOPS, 10.81 MiB/s [2024-11-19T00:00:30.637Z] 2816.00 IOPS, 11.00 MiB/s [2024-11-19T00:00:31.574Z] 2848.00 IOPS, 11.12 MiB/s [2024-11-19T00:00:32.511Z] 2872.89 IOPS, 11.22 MiB/s [2024-11-19T00:00:32.511Z] 2890.90 IOPS, 11.29 MiB/s 00:16:25.819 Latency(us) 00:16:25.819 [2024-11-19T00:00:32.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.819 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:25.819 Verification LBA range: start 0x0 length 0x2000 00:16:25.819 TLSTESTn1 : 10.04 2892.53 11.30 0.00 0.00 44152.59 7566.43 42896.29 00:16:25.819 [2024-11-19T00:00:32.511Z] =================================================================================================================== 00:16:25.819 [2024-11-19T00:00:32.511Z] Total : 2892.53 11.30 0.00 0.00 44152.59 7566.43 42896.29 00:16:25.819 { 00:16:25.819 "results": [ 00:16:25.819 { 00:16:25.819 "job": "TLSTESTn1", 00:16:25.819 "core_mask": "0x4", 00:16:25.819 "workload": "verify", 00:16:25.819 "status": "finished", 00:16:25.819 "verify_range": { 00:16:25.819 "start": 0, 00:16:25.819 "length": 8192 00:16:25.819 }, 00:16:25.819 "queue_depth": 128, 00:16:25.819 "io_size": 4096, 00:16:25.819 "runtime": 10.03794, 00:16:25.819 "iops": 2892.5257572768915, 00:16:25.819 "mibps": 11.298928739362857, 00:16:25.819 "io_failed": 0, 00:16:25.819 "io_timeout": 0, 00:16:25.819 "avg_latency_us": 44152.585670585664, 00:16:25.819 "min_latency_us": 7566.4290909090905, 00:16:25.819 "max_latency_us": 42896.29090909091 00:16:25.819 } 00:16:25.819 ], 00:16:25.819 "core_count": 1 00:16:25.819 } 00:16:25.819 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:25.819 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 74033 00:16:25.819 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74033 ']' 00:16:25.819 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74033 00:16:25.819 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:25.819 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.819 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74033 00:16:25.819 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:25.819 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:25.819 killing process with pid 74033 00:16:25.819 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74033' 00:16:25.819 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74033 00:16:25.819 Received shutdown signal, test time was about 10.000000 seconds 00:16:25.819 00:16:25.819 Latency(us) 00:16:25.819 [2024-11-19T00:00:32.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.819 [2024-11-19T00:00:32.511Z] 
=================================================================================================================== 00:16:25.819 [2024-11-19T00:00:32.511Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:25.819 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74033 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MtbfundQT0 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MtbfundQT0 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MtbfundQT0 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MtbfundQT0 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74175 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74175 /var/tmp/bdevperf.sock 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74175 ']' 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
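[Editor's note] Test 147 above is deliberately negative: run_bdevperf is handed the second key, /tmp/tmp.MtbfundQT0, which was never registered on the target, so the attach has to fail and NOT inverts the result. A simplified sketch of the NOT/valid_exec_arg scaffolding whose es bookkeeping is visible in the trace (the real helper also distinguishes signal deaths via es > 128 and a known-failure list, both elided here):

NOT() {
    local es=0
    "$@" || es=$?   # run the wrapped command without tripping set -e
    (( es != 0 ))   # the harness passes only when the command failed
}
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MtbfundQT0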
00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.757 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:26.757 [2024-11-19 00:00:33.376104] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:16:26.757 [2024-11-19 00:00:33.376295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74175 ] 00:16:27.017 [2024-11-19 00:00:33.555390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.017 [2024-11-19 00:00:33.656815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.276 [2024-11-19 00:00:33.830400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:27.843 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.843 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:27.843 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MtbfundQT0 00:16:28.103 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:28.361 [2024-11-19 00:00:34.993200] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:28.362 [2024-11-19 00:00:35.002377] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:28.362 [2024-11-19 00:00:35.003110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:16:28.362 [2024-11-19 00:00:35.004087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:16:28.362 [2024-11-19 00:00:35.005052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:28.362 [2024-11-19 00:00:35.005101] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:28.362 [2024-11-19 00:00:35.005123] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:28.362 [2024-11-19 00:00:35.005149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:16:28.362 request: 00:16:28.362 { 00:16:28.362 "name": "TLSTEST", 00:16:28.362 "trtype": "tcp", 00:16:28.362 "traddr": "10.0.0.3", 00:16:28.362 "adrfam": "ipv4", 00:16:28.362 "trsvcid": "4420", 00:16:28.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:28.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:28.362 "prchk_reftag": false, 00:16:28.362 "prchk_guard": false, 00:16:28.362 "hdgst": false, 00:16:28.362 "ddgst": false, 00:16:28.362 "psk": "key0", 00:16:28.362 "allow_unrecognized_csi": false, 00:16:28.362 "method": "bdev_nvme_attach_controller", 00:16:28.362 "req_id": 1 00:16:28.362 } 00:16:28.362 Got JSON-RPC error response 00:16:28.362 response: 00:16:28.362 { 00:16:28.362 "code": -5, 00:16:28.362 "message": "Input/output error" 00:16:28.362 } 00:16:28.362 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74175 00:16:28.362 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74175 ']' 00:16:28.362 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74175 00:16:28.362 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:28.362 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.362 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74175 00:16:28.621 killing process with pid 74175 00:16:28.621 Received shutdown signal, test time was about 10.000000 seconds 00:16:28.621 00:16:28.621 Latency(us) 00:16:28.621 [2024-11-19T00:00:35.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.621 [2024-11-19T00:00:35.313Z] =================================================================================================================== 00:16:28.621 [2024-11-19T00:00:35.313Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:28.621 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:28.621 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:28.621 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74175' 00:16:28.621 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74175 00:16:28.621 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74175 00:16:29.188 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:29.188 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:29.188 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:29.188 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:29.188 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:29.188 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xTTaKAfWxZ 00:16:29.188 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:29.188 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xTTaKAfWxZ 
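Note: the request/response pair above is SPDK's JSON-RPC as dumped by the harness (method and req_id folded into one object); on the wire it is a standard JSON-RPC 2.0 envelope over the Unix socket, which is what scripts/rpc.py sends. A minimal sketch of the same failing call without rpc.py; the single recv() is a simplification (the real client buffers until a complete JSON object arrives), and the optional prchk/digest flags from the logged request are omitted:

import json
import socket

# Same bdev_nvme_attach_controller call the log shows failing above.
req = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_nvme_attach_controller",
    "params": {
        "name": "TLSTEST",
        "trtype": "tcp",
        "traddr": "10.0.0.3",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "psk": "key0",
    },
}

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/var/tmp/bdevperf.sock")
sock.sendall(json.dumps(req).encode())
# With the mismatched PSK the reply carries code -5 (Input/output
# error), matching the response object logged above.
print(sock.recv(65536).decode())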
00:16:29.188 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:29.188 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.188 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:29.188 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.188 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xTTaKAfWxZ 00:16:29.188 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:29.188 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:29.188 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:29.188 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xTTaKAfWxZ 00:16:29.188 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:29.189 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74210 00:16:29.189 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:29.189 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:29.189 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74210 /var/tmp/bdevperf.sock 00:16:29.189 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74210 ']' 00:16:29.189 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:29.189 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.189 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:29.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:29.189 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.189 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:29.448 [2024-11-19 00:00:35.928750] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:16:29.448 [2024-11-19 00:00:35.928889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74210 ] 00:16:29.448 [2024-11-19 00:00:36.095851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.708 [2024-11-19 00:00:36.195177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.708 [2024-11-19 00:00:36.358136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:30.643 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:30.643 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:30.643 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xTTaKAfWxZ 00:16:30.643 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:16:31.209 [2024-11-19 00:00:37.663753] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:31.209 [2024-11-19 00:00:37.676686] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:31.209 [2024-11-19 00:00:37.676762] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:31.209 [2024-11-19 00:00:37.676858] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:31.209 [2024-11-19 00:00:37.677629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:16:31.209 [2024-11-19 00:00:37.678578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:16:31.209 [2024-11-19 00:00:37.679566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:31.209 [2024-11-19 00:00:37.679650] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:31.209 [2024-11-19 00:00:37.679673] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:31.209 [2024-11-19 00:00:37.679698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
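Note: keyring_file_add_key succeeds here because the PSK file was staged with an absolute path and mode 0600; the keyring module rejects relative or empty paths and group/world-readable files, which is exactly what the '' path and the chmod-0666 cases later in this run demonstrate. A sketch of staging a key file the keyring will accept; the path is illustrative, and the interchange string is the one this run generates later with format_interchange_psk:

import os

key_path = "/tmp/tls_psk.key"  # must be an absolute path for keyring_file
# O_CREAT with mode 0o600 keeps the file owner-only, as required.
fd = os.open(key_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
with os.fdopen(fd, "w") as f:
    f.write("NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:")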
00:16:31.209 request: 00:16:31.209 { 00:16:31.209 "name": "TLSTEST", 00:16:31.209 "trtype": "tcp", 00:16:31.209 "traddr": "10.0.0.3", 00:16:31.209 "adrfam": "ipv4", 00:16:31.209 "trsvcid": "4420", 00:16:31.209 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:31.209 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:31.209 "prchk_reftag": false, 00:16:31.209 "prchk_guard": false, 00:16:31.209 "hdgst": false, 00:16:31.209 "ddgst": false, 00:16:31.209 "psk": "key0", 00:16:31.209 "allow_unrecognized_csi": false, 00:16:31.209 "method": "bdev_nvme_attach_controller", 00:16:31.209 "req_id": 1 00:16:31.209 } 00:16:31.209 Got JSON-RPC error response 00:16:31.209 response: 00:16:31.209 { 00:16:31.209 "code": -5, 00:16:31.209 "message": "Input/output error" 00:16:31.209 } 00:16:31.209 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74210 00:16:31.209 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74210 ']' 00:16:31.209 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74210 00:16:31.209 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:31.209 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:31.209 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74210 00:16:31.209 killing process with pid 74210 00:16:31.209 Received shutdown signal, test time was about 10.000000 seconds 00:16:31.209 00:16:31.209 Latency(us) 00:16:31.209 [2024-11-19T00:00:37.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.209 [2024-11-19T00:00:37.901Z] =================================================================================================================== 00:16:31.209 [2024-11-19T00:00:37.901Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:31.209 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:31.209 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:31.209 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74210' 00:16:31.209 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74210 00:16:31.209 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74210 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xTTaKAfWxZ 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xTTaKAfWxZ 
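Note: the "Could not find PSK for identity" errors above show the lookup key the target computes during the TLS handshake: a fixed prefix plus the host and subsystem NQNs. This negative test connects as host2, for which no PSK was registered on the target, so the lookup fails and the qpair is torn down. A sketch of how that identity string is assembled; the "NVMe0R01" prefix is copied from the log itself (per the NVMe/TCP transport spec it encodes the PSK type and hash selection):

hostnqn = "nqn.2016-06.io.spdk:host2"
subnqn = "nqn.2016-06.io.spdk:cnode1"
identity = f"NVMe0R01 {hostnqn} {subnqn}"
# Matches the identity in the tcp.c/posix.c error lines above.
print(identity)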
00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xTTaKAfWxZ 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xTTaKAfWxZ 00:16:32.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74257 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74257 /var/tmp/bdevperf.sock 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74257 ']' 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.146 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:32.146 [2024-11-19 00:00:38.750291] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:16:32.146 [2024-11-19 00:00:38.750758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74257 ] 00:16:32.405 [2024-11-19 00:00:38.931100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.405 [2024-11-19 00:00:39.024008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.664 [2024-11-19 00:00:39.175909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:33.232 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.232 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:33.232 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xTTaKAfWxZ 00:16:33.491 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:33.751 [2024-11-19 00:00:40.223449] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:33.751 [2024-11-19 00:00:40.233301] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:33.751 [2024-11-19 00:00:40.233349] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:33.751 [2024-11-19 00:00:40.233432] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:33.751 [2024-11-19 00:00:40.233616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:16:33.751 [2024-11-19 00:00:40.234591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:16:33.751 [2024-11-19 00:00:40.235567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:16:33.751 [2024-11-19 00:00:40.235815] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:33.751 [2024-11-19 00:00:40.235841] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:16:33.751 [2024-11-19 00:00:40.235865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:16:33.751 request: 00:16:33.751 { 00:16:33.751 "name": "TLSTEST", 00:16:33.751 "trtype": "tcp", 00:16:33.751 "traddr": "10.0.0.3", 00:16:33.751 "adrfam": "ipv4", 00:16:33.751 "trsvcid": "4420", 00:16:33.751 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:33.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:33.751 "prchk_reftag": false, 00:16:33.751 "prchk_guard": false, 00:16:33.751 "hdgst": false, 00:16:33.751 "ddgst": false, 00:16:33.751 "psk": "key0", 00:16:33.751 "allow_unrecognized_csi": false, 00:16:33.751 "method": "bdev_nvme_attach_controller", 00:16:33.751 "req_id": 1 00:16:33.751 } 00:16:33.751 Got JSON-RPC error response 00:16:33.751 response: 00:16:33.751 { 00:16:33.751 "code": -5, 00:16:33.751 "message": "Input/output error" 00:16:33.751 } 00:16:33.751 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74257 00:16:33.751 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74257 ']' 00:16:33.751 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74257 00:16:33.751 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:33.751 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.751 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74257 00:16:33.751 killing process with pid 74257 00:16:33.751 Received shutdown signal, test time was about 10.000000 seconds 00:16:33.751 00:16:33.751 Latency(us) 00:16:33.751 [2024-11-19T00:00:40.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.751 [2024-11-19T00:00:40.443Z] =================================================================================================================== 00:16:33.751 [2024-11-19T00:00:40.443Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:33.751 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:33.751 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:33.751 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74257' 00:16:33.751 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74257 00:16:33.751 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74257 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:34.688 00:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74299 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74299 /var/tmp/bdevperf.sock 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74299 ']' 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:34.688 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.689 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:34.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:34.689 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.689 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:34.689 [2024-11-19 00:00:41.187653] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:16:34.689 [2024-11-19 00:00:41.188120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74299 ] 00:16:34.689 [2024-11-19 00:00:41.369737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.948 [2024-11-19 00:00:41.469760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.948 [2024-11-19 00:00:41.620223] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:35.516 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.516 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:35.516 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:16:35.775 [2024-11-19 00:00:42.321300] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:16:35.775 [2024-11-19 00:00:42.321362] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:35.775 request: 00:16:35.775 { 00:16:35.775 "name": "key0", 00:16:35.775 "path": "", 00:16:35.775 "method": "keyring_file_add_key", 00:16:35.775 "req_id": 1 00:16:35.775 } 00:16:35.775 Got JSON-RPC error response 00:16:35.775 response: 00:16:35.775 { 00:16:35.775 "code": -1, 00:16:35.775 "message": "Operation not permitted" 00:16:35.775 } 00:16:35.775 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:36.072 [2024-11-19 00:00:42.577536] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:36.072 [2024-11-19 00:00:42.577946] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:36.072 request: 00:16:36.072 { 00:16:36.072 "name": "TLSTEST", 00:16:36.072 "trtype": "tcp", 00:16:36.072 "traddr": "10.0.0.3", 00:16:36.072 "adrfam": "ipv4", 00:16:36.072 "trsvcid": "4420", 00:16:36.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:36.072 "prchk_reftag": false, 00:16:36.072 "prchk_guard": false, 00:16:36.072 "hdgst": false, 00:16:36.072 "ddgst": false, 00:16:36.072 "psk": "key0", 00:16:36.072 "allow_unrecognized_csi": false, 00:16:36.072 "method": "bdev_nvme_attach_controller", 00:16:36.072 "req_id": 1 00:16:36.072 } 00:16:36.072 Got JSON-RPC error response 00:16:36.072 response: 00:16:36.072 { 00:16:36.072 "code": -126, 00:16:36.072 "message": "Required key not available" 00:16:36.072 } 00:16:36.072 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74299 00:16:36.072 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74299 ']' 00:16:36.072 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74299 00:16:36.072 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:36.072 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:36.072 00:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74299 00:16:36.072 killing process with pid 74299 00:16:36.072 Received shutdown signal, test time was about 10.000000 seconds 00:16:36.072 00:16:36.072 Latency(us) 00:16:36.072 [2024-11-19T00:00:42.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.072 [2024-11-19T00:00:42.764Z] =================================================================================================================== 00:16:36.072 [2024-11-19T00:00:42.764Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:36.072 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:36.072 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:36.072 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74299' 00:16:36.072 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74299 00:16:36.072 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74299 00:16:37.056 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:37.056 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:37.056 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:37.056 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:37.056 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:37.056 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 73788 00:16:37.056 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73788 ']' 00:16:37.056 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73788 00:16:37.056 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:37.056 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.056 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73788 00:16:37.056 killing process with pid 73788 00:16:37.056 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:37.056 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:37.056 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73788' 00:16:37.056 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73788 00:16:37.056 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73788 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.NefXfLmuNW 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.NefXfLmuNW 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=74357 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 74357 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74357 ']' 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.992 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:38.251 [2024-11-19 00:00:44.805818] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:16:38.252 [2024-11-19 00:00:44.806871] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.511 [2024-11-19 00:00:44.983664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.511 [2024-11-19 00:00:45.064578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.511 [2024-11-19 00:00:45.064946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:38.511 [2024-11-19 00:00:45.064979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:38.511 [2024-11-19 00:00:45.065002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:38.511 [2024-11-19 00:00:45.065015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:38.511 [2024-11-19 00:00:45.066040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.770 [2024-11-19 00:00:45.224856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:39.029 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:39.029 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:39.029 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:39.029 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:39.029 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:39.288 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.288 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.NefXfLmuNW 00:16:39.288 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NefXfLmuNW 00:16:39.288 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:39.547 [2024-11-19 00:00:46.000155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.547 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:39.805 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:40.063 [2024-11-19 00:00:46.516424] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:40.063 [2024-11-19 00:00:46.516830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:40.063 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:40.321 malloc0 00:16:40.322 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:40.580 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NefXfLmuNW 00:16:40.839 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:41.098 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NefXfLmuNW 00:16:41.098 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:16:41.098 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:41.098 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:41.098 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NefXfLmuNW 00:16:41.098 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:41.098 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:41.098 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74413 00:16:41.098 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:41.099 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74413 /var/tmp/bdevperf.sock 00:16:41.099 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74413 ']' 00:16:41.099 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:41.099 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.099 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:41.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:41.099 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.099 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:41.099 [2024-11-19 00:00:47.711423] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:16:41.099 [2024-11-19 00:00:47.711906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74413 ] 00:16:41.357 [2024-11-19 00:00:47.884890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.357 [2024-11-19 00:00:47.989503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.615 [2024-11-19 00:00:48.175233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:42.182 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.182 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:42.182 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NefXfLmuNW 00:16:42.440 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:42.723 [2024-11-19 00:00:49.337218] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:42.987 TLSTESTn1 00:16:42.987 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:42.987 Running I/O for 10 seconds... 00:16:45.300 2816.00 IOPS, 11.00 MiB/s [2024-11-19T00:00:52.930Z] 2987.50 IOPS, 11.67 MiB/s [2024-11-19T00:00:53.866Z] 3006.00 IOPS, 11.74 MiB/s [2024-11-19T00:00:54.803Z] 2974.00 IOPS, 11.62 MiB/s [2024-11-19T00:00:55.740Z] 2958.00 IOPS, 11.55 MiB/s [2024-11-19T00:00:56.677Z] 2945.00 IOPS, 11.50 MiB/s [2024-11-19T00:00:57.616Z] 2936.43 IOPS, 11.47 MiB/s [2024-11-19T00:00:58.995Z] 2954.38 IOPS, 11.54 MiB/s [2024-11-19T00:00:59.931Z] 2991.78 IOPS, 11.69 MiB/s [2024-11-19T00:00:59.931Z] 3020.90 IOPS, 11.80 MiB/s 00:16:53.239 Latency(us) 00:16:53.239 [2024-11-19T00:00:59.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.239 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:53.239 Verification LBA range: start 0x0 length 0x2000 00:16:53.239 TLSTESTn1 : 10.02 3027.71 11.83 0.00 0.00 42199.13 6642.97 33602.09 00:16:53.239 [2024-11-19T00:00:59.931Z] =================================================================================================================== 00:16:53.239 [2024-11-19T00:00:59.931Z] Total : 3027.71 11.83 0.00 0.00 42199.13 6642.97 33602.09 00:16:53.239 { 00:16:53.239 "results": [ 00:16:53.239 { 00:16:53.239 "job": "TLSTESTn1", 00:16:53.239 "core_mask": "0x4", 00:16:53.239 "workload": "verify", 00:16:53.239 "status": "finished", 00:16:53.239 "verify_range": { 00:16:53.239 "start": 0, 00:16:53.239 "length": 8192 00:16:53.239 }, 00:16:53.239 "queue_depth": 128, 00:16:53.239 "io_size": 4096, 00:16:53.239 "runtime": 10.019774, 00:16:53.239 "iops": 3027.7130003131806, 00:16:53.239 "mibps": 11.827003907473362, 00:16:53.239 "io_failed": 0, 00:16:53.239 "io_timeout": 0, 00:16:53.239 "avg_latency_us": 42199.12631859686, 00:16:53.239 "min_latency_us": 6642.967272727273, 00:16:53.239 
"max_latency_us": 33602.09454545454 00:16:53.239 } 00:16:53.239 ], 00:16:53.239 "core_count": 1 00:16:53.239 } 00:16:53.239 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:53.239 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 74413 00:16:53.239 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74413 ']' 00:16:53.239 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74413 00:16:53.239 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:53.239 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.239 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74413 00:16:53.239 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:53.239 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:53.239 killing process with pid 74413 00:16:53.239 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74413' 00:16:53.239 Received shutdown signal, test time was about 10.000000 seconds 00:16:53.239 00:16:53.239 Latency(us) 00:16:53.239 [2024-11-19T00:00:59.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.239 [2024-11-19T00:00:59.931Z] =================================================================================================================== 00:16:53.239 [2024-11-19T00:00:59.931Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:53.239 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74413 00:16:53.239 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74413 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.NefXfLmuNW 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NefXfLmuNW 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NefXfLmuNW 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NefXfLmuNW 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NefXfLmuNW 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74563 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74563 /var/tmp/bdevperf.sock 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74563 ']' 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:54.176 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:54.177 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:54.177 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.177 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:54.177 [2024-11-19 00:01:00.629943] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:16:54.177 [2024-11-19 00:01:00.630131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74563 ] 00:16:54.177 [2024-11-19 00:01:00.807912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.435 [2024-11-19 00:01:00.906166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.435 [2024-11-19 00:01:01.073865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:55.002 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.002 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:55.002 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NefXfLmuNW 00:16:55.260 [2024-11-19 00:01:01.772420] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NefXfLmuNW': 0100666 00:16:55.260 [2024-11-19 00:01:01.772517] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:55.260 request: 00:16:55.260 { 00:16:55.260 "name": "key0", 00:16:55.260 "path": "/tmp/tmp.NefXfLmuNW", 00:16:55.260 "method": "keyring_file_add_key", 00:16:55.260 "req_id": 1 00:16:55.260 } 00:16:55.260 Got JSON-RPC error response 00:16:55.260 response: 00:16:55.260 { 00:16:55.260 "code": -1, 00:16:55.260 "message": "Operation not permitted" 00:16:55.261 } 00:16:55.261 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:55.519 [2024-11-19 00:01:02.012660] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:55.519 [2024-11-19 00:01:02.012770] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:55.519 request: 00:16:55.519 { 00:16:55.519 "name": "TLSTEST", 00:16:55.519 "trtype": "tcp", 00:16:55.519 "traddr": "10.0.0.3", 00:16:55.519 "adrfam": "ipv4", 00:16:55.520 "trsvcid": "4420", 00:16:55.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:55.520 "prchk_reftag": false, 00:16:55.520 "prchk_guard": false, 00:16:55.520 "hdgst": false, 00:16:55.520 "ddgst": false, 00:16:55.520 "psk": "key0", 00:16:55.520 "allow_unrecognized_csi": false, 00:16:55.520 "method": "bdev_nvme_attach_controller", 00:16:55.520 "req_id": 1 00:16:55.520 } 00:16:55.520 Got JSON-RPC error response 00:16:55.520 response: 00:16:55.520 { 00:16:55.520 "code": -126, 00:16:55.520 "message": "Required key not available" 00:16:55.520 } 00:16:55.520 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74563 00:16:55.520 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74563 ']' 00:16:55.520 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74563 00:16:55.520 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:55.520 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.520 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74563 00:16:55.520 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:55.520 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:55.520 killing process with pid 74563 00:16:55.520 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74563' 00:16:55.520 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74563 00:16:55.520 Received shutdown signal, test time was about 10.000000 seconds 00:16:55.520 00:16:55.520 Latency(us) 00:16:55.520 [2024-11-19T00:01:02.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.520 [2024-11-19T00:01:02.212Z] =================================================================================================================== 00:16:55.520 [2024-11-19T00:01:02.212Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:55.520 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74563 00:16:56.455 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:56.455 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:56.455 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:56.455 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:56.455 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:56.455 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 74357 00:16:56.455 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74357 ']' 00:16:56.455 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74357 00:16:56.455 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:56.455 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.455 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74357 00:16:56.455 killing process with pid 74357 00:16:56.455 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:56.455 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:56.455 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74357' 00:16:56.455 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74357 00:16:56.455 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74357 00:16:57.393 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:16:57.393 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:57.393 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.393 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:16:57.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.393 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=74615 00:16:57.393 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 74615 00:16:57.393 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:57.393 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74615 ']' 00:16:57.393 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.393 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.393 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.393 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.393 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.652 [2024-11-19 00:01:04.121341] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:16:57.652 [2024-11-19 00:01:04.121522] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.652 [2024-11-19 00:01:04.301267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.911 [2024-11-19 00:01:04.389951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.911 [2024-11-19 00:01:04.390029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.911 [2024-11-19 00:01:04.390063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.911 [2024-11-19 00:01:04.390086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.911 [2024-11-19 00:01:04.390100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
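The boot sequence above is the harness's standard pattern: launch nvmf_tgt inside the nvmf_tgt_ns_spdk network namespace, then block until its JSON-RPC socket answers. A minimal bash sketch of that pattern follows; the polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its exact code:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll the default app socket until the target accepts RPCs.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done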
00:16:57.911 [2024-11-19 00:01:04.391238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.911 [2024-11-19 00:01:04.561741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:58.533 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.533 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:58.533 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:58.533 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:58.533 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:58.533 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.533 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.NefXfLmuNW 00:16:58.533 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:58.533 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.NefXfLmuNW 00:16:58.533 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:16:58.533 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.533 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:16:58.533 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.533 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.NefXfLmuNW 00:16:58.533 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NefXfLmuNW 00:16:58.533 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:58.792 [2024-11-19 00:01:05.295990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.792 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:59.050 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:59.309 [2024-11-19 00:01:05.804193] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:59.309 [2024-11-19 00:01:05.804570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:59.309 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:59.568 malloc0 00:16:59.568 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:59.826 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NefXfLmuNW 00:17:00.084 
[2024-11-19 00:01:06.631397] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NefXfLmuNW': 0100666 00:17:00.084 [2024-11-19 00:01:06.631463] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:00.084 request: 00:17:00.084 { 00:17:00.084 "name": "key0", 00:17:00.084 "path": "/tmp/tmp.NefXfLmuNW", 00:17:00.084 "method": "keyring_file_add_key", 00:17:00.084 "req_id": 1 00:17:00.084 } 00:17:00.084 Got JSON-RPC error response 00:17:00.084 response: 00:17:00.084 { 00:17:00.084 "code": -1, 00:17:00.084 "message": "Operation not permitted" 00:17:00.084 } 00:17:00.084 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:00.343 [2024-11-19 00:01:06.871514] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:17:00.343 [2024-11-19 00:01:06.871623] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:00.343 request: 00:17:00.343 { 00:17:00.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.343 "host": "nqn.2016-06.io.spdk:host1", 00:17:00.343 "psk": "key0", 00:17:00.343 "method": "nvmf_subsystem_add_host", 00:17:00.343 "req_id": 1 00:17:00.343 } 00:17:00.343 Got JSON-RPC error response 00:17:00.343 response: 00:17:00.343 { 00:17:00.343 "code": -32603, 00:17:00.343 "message": "Internal error" 00:17:00.343 } 00:17:00.343 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:00.343 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:00.343 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:00.343 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:00.343 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 74615 00:17:00.343 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74615 ']' 00:17:00.343 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74615 00:17:00.343 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:00.343 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.343 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74615 00:17:00.343 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:00.343 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:00.343 killing process with pid 74615 00:17:00.343 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74615' 00:17:00.343 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74615 00:17:00.343 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74615 00:17:01.280 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.NefXfLmuNW 00:17:01.280 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:17:01.280 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:01.280 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:01.280 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:01.280 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=74691 00:17:01.280 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:01.280 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 74691 00:17:01.280 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74691 ']' 00:17:01.280 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.280 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:01.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.280 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.280 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:01.280 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:01.539 [2024-11-19 00:01:08.022050] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:01.539 [2024-11-19 00:01:08.022221] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.539 [2024-11-19 00:01:08.205547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.798 [2024-11-19 00:01:08.301323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.798 [2024-11-19 00:01:08.301405] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.798 [2024-11-19 00:01:08.301441] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.798 [2024-11-19 00:01:08.301480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.798 [2024-11-19 00:01:08.301495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
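The keyring_file_add_key failure above is the point of the target/tls.sh@178 negative test: SPDK's file keyring rejects the world-readable PSK file (mode 0100666) with JSON-RPC error -1 "Operation not permitted", and the chmod 0600 at tls.sh@182 is what lets the same key load cleanly in the retry that follows. The fix-and-retry reduces to:

    # Rejected while the key file is mode 0666:
    #   keyring.c: Invalid permissions for key file '/tmp/tmp.NefXfLmuNW': 0100666
    #   JSON-RPC error -1: Operation not permitted
    chmod 0600 /tmp/tmp.NefXfLmuNW
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NefXfLmuNW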
00:17:01.798 [2024-11-19 00:01:08.302753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.798 [2024-11-19 00:01:08.479284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:02.365 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.365 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:02.365 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:02.365 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:02.365 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:02.365 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.365 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.NefXfLmuNW 00:17:02.365 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NefXfLmuNW 00:17:02.365 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:02.623 [2024-11-19 00:01:09.285164] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.623 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:02.882 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:03.141 [2024-11-19 00:01:09.805365] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:03.141 [2024-11-19 00:01:09.805709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:03.141 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:03.708 malloc0 00:17:03.708 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:03.708 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NefXfLmuNW 00:17:03.967 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:04.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:04.534 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=74752 00:17:04.534 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:04.534 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:04.534 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 74752 /var/tmp/bdevperf.sock 00:17:04.534 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74752 ']' 00:17:04.534 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:04.534 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.534 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:04.534 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.534 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:04.534 [2024-11-19 00:01:11.062295] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:04.535 [2024-11-19 00:01:11.062664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74752 ] 00:17:04.793 [2024-11-19 00:01:11.238461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.793 [2024-11-19 00:01:11.355875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.052 [2024-11-19 00:01:11.533928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:05.624 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.624 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:05.624 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NefXfLmuNW 00:17:05.882 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:06.140 [2024-11-19 00:01:12.679545] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:06.140 TLSTESTn1 00:17:06.140 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:06.708 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:17:06.708 "subsystems": [ 00:17:06.708 { 00:17:06.708 "subsystem": "keyring", 00:17:06.708 "config": [ 00:17:06.708 { 00:17:06.708 "method": "keyring_file_add_key", 00:17:06.708 "params": { 00:17:06.708 "name": "key0", 00:17:06.708 "path": "/tmp/tmp.NefXfLmuNW" 00:17:06.708 } 00:17:06.708 } 00:17:06.708 ] 00:17:06.708 }, 
00:17:06.708 { 00:17:06.708 "subsystem": "iobuf", 00:17:06.708 "config": [ 00:17:06.708 { 00:17:06.708 "method": "iobuf_set_options", 00:17:06.708 "params": { 00:17:06.708 "small_pool_count": 8192, 00:17:06.708 "large_pool_count": 1024, 00:17:06.708 "small_bufsize": 8192, 00:17:06.708 "large_bufsize": 135168, 00:17:06.708 "enable_numa": false 00:17:06.708 } 00:17:06.708 } 00:17:06.708 ] 00:17:06.708 }, 00:17:06.708 { 00:17:06.708 "subsystem": "sock", 00:17:06.708 "config": [ 00:17:06.708 { 00:17:06.708 "method": "sock_set_default_impl", 00:17:06.708 "params": { 00:17:06.708 "impl_name": "uring" 00:17:06.708 } 00:17:06.708 }, 00:17:06.708 { 00:17:06.708 "method": "sock_impl_set_options", 00:17:06.708 "params": { 00:17:06.708 "impl_name": "ssl", 00:17:06.708 "recv_buf_size": 4096, 00:17:06.708 "send_buf_size": 4096, 00:17:06.708 "enable_recv_pipe": true, 00:17:06.708 "enable_quickack": false, 00:17:06.708 "enable_placement_id": 0, 00:17:06.708 "enable_zerocopy_send_server": true, 00:17:06.708 "enable_zerocopy_send_client": false, 00:17:06.708 "zerocopy_threshold": 0, 00:17:06.708 "tls_version": 0, 00:17:06.708 "enable_ktls": false 00:17:06.708 } 00:17:06.708 }, 00:17:06.708 { 00:17:06.708 "method": "sock_impl_set_options", 00:17:06.708 "params": { 00:17:06.708 "impl_name": "posix", 00:17:06.708 "recv_buf_size": 2097152, 00:17:06.708 "send_buf_size": 2097152, 00:17:06.708 "enable_recv_pipe": true, 00:17:06.708 "enable_quickack": false, 00:17:06.708 "enable_placement_id": 0, 00:17:06.708 "enable_zerocopy_send_server": true, 00:17:06.708 "enable_zerocopy_send_client": false, 00:17:06.708 "zerocopy_threshold": 0, 00:17:06.708 "tls_version": 0, 00:17:06.708 "enable_ktls": false 00:17:06.708 } 00:17:06.708 }, 00:17:06.708 { 00:17:06.708 "method": "sock_impl_set_options", 00:17:06.708 "params": { 00:17:06.708 "impl_name": "uring", 00:17:06.708 "recv_buf_size": 2097152, 00:17:06.708 "send_buf_size": 2097152, 00:17:06.708 "enable_recv_pipe": true, 00:17:06.708 "enable_quickack": false, 00:17:06.708 "enable_placement_id": 0, 00:17:06.708 "enable_zerocopy_send_server": false, 00:17:06.708 "enable_zerocopy_send_client": false, 00:17:06.708 "zerocopy_threshold": 0, 00:17:06.708 "tls_version": 0, 00:17:06.708 "enable_ktls": false 00:17:06.708 } 00:17:06.708 } 00:17:06.708 ] 00:17:06.708 }, 00:17:06.708 { 00:17:06.708 "subsystem": "vmd", 00:17:06.708 "config": [] 00:17:06.708 }, 00:17:06.708 { 00:17:06.708 "subsystem": "accel", 00:17:06.708 "config": [ 00:17:06.708 { 00:17:06.708 "method": "accel_set_options", 00:17:06.708 "params": { 00:17:06.708 "small_cache_size": 128, 00:17:06.708 "large_cache_size": 16, 00:17:06.709 "task_count": 2048, 00:17:06.709 "sequence_count": 2048, 00:17:06.709 "buf_count": 2048 00:17:06.709 } 00:17:06.709 } 00:17:06.709 ] 00:17:06.709 }, 00:17:06.709 { 00:17:06.709 "subsystem": "bdev", 00:17:06.709 "config": [ 00:17:06.709 { 00:17:06.709 "method": "bdev_set_options", 00:17:06.709 "params": { 00:17:06.709 "bdev_io_pool_size": 65535, 00:17:06.709 "bdev_io_cache_size": 256, 00:17:06.709 "bdev_auto_examine": true, 00:17:06.709 "iobuf_small_cache_size": 128, 00:17:06.709 "iobuf_large_cache_size": 16 00:17:06.709 } 00:17:06.709 }, 00:17:06.709 { 00:17:06.709 "method": "bdev_raid_set_options", 00:17:06.709 "params": { 00:17:06.709 "process_window_size_kb": 1024, 00:17:06.709 "process_max_bandwidth_mb_sec": 0 00:17:06.709 } 00:17:06.709 }, 00:17:06.709 { 00:17:06.709 "method": "bdev_iscsi_set_options", 00:17:06.709 "params": { 00:17:06.709 "timeout_sec": 30 00:17:06.709 } 00:17:06.709 
}, 00:17:06.709 { 00:17:06.709 "method": "bdev_nvme_set_options", 00:17:06.709 "params": { 00:17:06.709 "action_on_timeout": "none", 00:17:06.709 "timeout_us": 0, 00:17:06.709 "timeout_admin_us": 0, 00:17:06.709 "keep_alive_timeout_ms": 10000, 00:17:06.709 "arbitration_burst": 0, 00:17:06.709 "low_priority_weight": 0, 00:17:06.709 "medium_priority_weight": 0, 00:17:06.709 "high_priority_weight": 0, 00:17:06.709 "nvme_adminq_poll_period_us": 10000, 00:17:06.709 "nvme_ioq_poll_period_us": 0, 00:17:06.709 "io_queue_requests": 0, 00:17:06.709 "delay_cmd_submit": true, 00:17:06.709 "transport_retry_count": 4, 00:17:06.709 "bdev_retry_count": 3, 00:17:06.709 "transport_ack_timeout": 0, 00:17:06.709 "ctrlr_loss_timeout_sec": 0, 00:17:06.709 "reconnect_delay_sec": 0, 00:17:06.709 "fast_io_fail_timeout_sec": 0, 00:17:06.709 "disable_auto_failback": false, 00:17:06.709 "generate_uuids": false, 00:17:06.709 "transport_tos": 0, 00:17:06.709 "nvme_error_stat": false, 00:17:06.709 "rdma_srq_size": 0, 00:17:06.709 "io_path_stat": false, 00:17:06.709 "allow_accel_sequence": false, 00:17:06.709 "rdma_max_cq_size": 0, 00:17:06.709 "rdma_cm_event_timeout_ms": 0, 00:17:06.709 "dhchap_digests": [ 00:17:06.709 "sha256", 00:17:06.709 "sha384", 00:17:06.709 "sha512" 00:17:06.709 ], 00:17:06.709 "dhchap_dhgroups": [ 00:17:06.709 "null", 00:17:06.709 "ffdhe2048", 00:17:06.709 "ffdhe3072", 00:17:06.709 "ffdhe4096", 00:17:06.709 "ffdhe6144", 00:17:06.709 "ffdhe8192" 00:17:06.709 ] 00:17:06.709 } 00:17:06.709 }, 00:17:06.709 { 00:17:06.709 "method": "bdev_nvme_set_hotplug", 00:17:06.709 "params": { 00:17:06.709 "period_us": 100000, 00:17:06.709 "enable": false 00:17:06.709 } 00:17:06.709 }, 00:17:06.709 { 00:17:06.709 "method": "bdev_malloc_create", 00:17:06.709 "params": { 00:17:06.709 "name": "malloc0", 00:17:06.709 "num_blocks": 8192, 00:17:06.709 "block_size": 4096, 00:17:06.709 "physical_block_size": 4096, 00:17:06.709 "uuid": "ea1303a9-79f2-485b-9c63-98e9b0d4a518", 00:17:06.709 "optimal_io_boundary": 0, 00:17:06.709 "md_size": 0, 00:17:06.709 "dif_type": 0, 00:17:06.709 "dif_is_head_of_md": false, 00:17:06.709 "dif_pi_format": 0 00:17:06.709 } 00:17:06.709 }, 00:17:06.709 { 00:17:06.709 "method": "bdev_wait_for_examine" 00:17:06.709 } 00:17:06.709 ] 00:17:06.709 }, 00:17:06.709 { 00:17:06.709 "subsystem": "nbd", 00:17:06.709 "config": [] 00:17:06.709 }, 00:17:06.709 { 00:17:06.709 "subsystem": "scheduler", 00:17:06.709 "config": [ 00:17:06.709 { 00:17:06.709 "method": "framework_set_scheduler", 00:17:06.709 "params": { 00:17:06.709 "name": "static" 00:17:06.709 } 00:17:06.709 } 00:17:06.709 ] 00:17:06.709 }, 00:17:06.709 { 00:17:06.709 "subsystem": "nvmf", 00:17:06.709 "config": [ 00:17:06.709 { 00:17:06.709 "method": "nvmf_set_config", 00:17:06.709 "params": { 00:17:06.709 "discovery_filter": "match_any", 00:17:06.709 "admin_cmd_passthru": { 00:17:06.709 "identify_ctrlr": false 00:17:06.709 }, 00:17:06.709 "dhchap_digests": [ 00:17:06.709 "sha256", 00:17:06.709 "sha384", 00:17:06.709 "sha512" 00:17:06.709 ], 00:17:06.709 "dhchap_dhgroups": [ 00:17:06.709 "null", 00:17:06.709 "ffdhe2048", 00:17:06.709 "ffdhe3072", 00:17:06.709 "ffdhe4096", 00:17:06.709 "ffdhe6144", 00:17:06.709 "ffdhe8192" 00:17:06.709 ] 00:17:06.709 } 00:17:06.709 }, 00:17:06.709 { 00:17:06.709 "method": "nvmf_set_max_subsystems", 00:17:06.709 "params": { 00:17:06.709 "max_subsystems": 1024 00:17:06.709 } 00:17:06.709 }, 00:17:06.709 { 00:17:06.709 "method": "nvmf_set_crdt", 00:17:06.709 "params": { 00:17:06.709 "crdt1": 0, 00:17:06.709 
"crdt2": 0, 00:17:06.709 "crdt3": 0 00:17:06.709 } 00:17:06.709 }, 00:17:06.709 { 00:17:06.709 "method": "nvmf_create_transport", 00:17:06.709 "params": { 00:17:06.709 "trtype": "TCP", 00:17:06.709 "max_queue_depth": 128, 00:17:06.709 "max_io_qpairs_per_ctrlr": 127, 00:17:06.709 "in_capsule_data_size": 4096, 00:17:06.709 "max_io_size": 131072, 00:17:06.709 "io_unit_size": 131072, 00:17:06.709 "max_aq_depth": 128, 00:17:06.709 "num_shared_buffers": 511, 00:17:06.709 "buf_cache_size": 4294967295, 00:17:06.709 "dif_insert_or_strip": false, 00:17:06.709 "zcopy": false, 00:17:06.709 "c2h_success": false, 00:17:06.709 "sock_priority": 0, 00:17:06.709 "abort_timeout_sec": 1, 00:17:06.709 "ack_timeout": 0, 00:17:06.709 "data_wr_pool_size": 0 00:17:06.709 } 00:17:06.709 }, 00:17:06.709 { 00:17:06.709 "method": "nvmf_create_subsystem", 00:17:06.709 "params": { 00:17:06.709 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.709 "allow_any_host": false, 00:17:06.709 "serial_number": "SPDK00000000000001", 00:17:06.709 "model_number": "SPDK bdev Controller", 00:17:06.709 "max_namespaces": 10, 00:17:06.709 "min_cntlid": 1, 00:17:06.709 "max_cntlid": 65519, 00:17:06.709 "ana_reporting": false 00:17:06.709 } 00:17:06.709 }, 00:17:06.709 { 00:17:06.709 "method": "nvmf_subsystem_add_host", 00:17:06.709 "params": { 00:17:06.709 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.709 "host": "nqn.2016-06.io.spdk:host1", 00:17:06.709 "psk": "key0" 00:17:06.709 } 00:17:06.709 }, 00:17:06.709 { 00:17:06.709 "method": "nvmf_subsystem_add_ns", 00:17:06.709 "params": { 00:17:06.709 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.709 "namespace": { 00:17:06.709 "nsid": 1, 00:17:06.709 "bdev_name": "malloc0", 00:17:06.709 "nguid": "EA1303A979F2485B9C6398E9B0D4A518", 00:17:06.709 "uuid": "ea1303a9-79f2-485b-9c63-98e9b0d4a518", 00:17:06.709 "no_auto_visible": false 00:17:06.709 } 00:17:06.709 } 00:17:06.709 }, 00:17:06.709 { 00:17:06.709 "method": "nvmf_subsystem_add_listener", 00:17:06.709 "params": { 00:17:06.709 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.709 "listen_address": { 00:17:06.709 "trtype": "TCP", 00:17:06.709 "adrfam": "IPv4", 00:17:06.709 "traddr": "10.0.0.3", 00:17:06.709 "trsvcid": "4420" 00:17:06.709 }, 00:17:06.709 "secure_channel": true 00:17:06.709 } 00:17:06.709 } 00:17:06.709 ] 00:17:06.709 } 00:17:06.709 ] 00:17:06.709 }' 00:17:06.709 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:06.969 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:17:06.969 "subsystems": [ 00:17:06.969 { 00:17:06.969 "subsystem": "keyring", 00:17:06.969 "config": [ 00:17:06.969 { 00:17:06.969 "method": "keyring_file_add_key", 00:17:06.969 "params": { 00:17:06.969 "name": "key0", 00:17:06.969 "path": "/tmp/tmp.NefXfLmuNW" 00:17:06.969 } 00:17:06.969 } 00:17:06.969 ] 00:17:06.969 }, 00:17:06.969 { 00:17:06.969 "subsystem": "iobuf", 00:17:06.969 "config": [ 00:17:06.969 { 00:17:06.969 "method": "iobuf_set_options", 00:17:06.969 "params": { 00:17:06.969 "small_pool_count": 8192, 00:17:06.969 "large_pool_count": 1024, 00:17:06.969 "small_bufsize": 8192, 00:17:06.969 "large_bufsize": 135168, 00:17:06.969 "enable_numa": false 00:17:06.969 } 00:17:06.969 } 00:17:06.969 ] 00:17:06.969 }, 00:17:06.969 { 00:17:06.969 "subsystem": "sock", 00:17:06.969 "config": [ 00:17:06.969 { 00:17:06.969 "method": "sock_set_default_impl", 00:17:06.969 "params": { 00:17:06.969 "impl_name": "uring" 00:17:06.969 
} 00:17:06.969 }, 00:17:06.969 { 00:17:06.969 "method": "sock_impl_set_options", 00:17:06.969 "params": { 00:17:06.969 "impl_name": "ssl", 00:17:06.969 "recv_buf_size": 4096, 00:17:06.969 "send_buf_size": 4096, 00:17:06.969 "enable_recv_pipe": true, 00:17:06.969 "enable_quickack": false, 00:17:06.969 "enable_placement_id": 0, 00:17:06.969 "enable_zerocopy_send_server": true, 00:17:06.969 "enable_zerocopy_send_client": false, 00:17:06.969 "zerocopy_threshold": 0, 00:17:06.969 "tls_version": 0, 00:17:06.969 "enable_ktls": false 00:17:06.969 } 00:17:06.969 }, 00:17:06.969 { 00:17:06.969 "method": "sock_impl_set_options", 00:17:06.969 "params": { 00:17:06.969 "impl_name": "posix", 00:17:06.969 "recv_buf_size": 2097152, 00:17:06.969 "send_buf_size": 2097152, 00:17:06.969 "enable_recv_pipe": true, 00:17:06.969 "enable_quickack": false, 00:17:06.969 "enable_placement_id": 0, 00:17:06.969 "enable_zerocopy_send_server": true, 00:17:06.969 "enable_zerocopy_send_client": false, 00:17:06.969 "zerocopy_threshold": 0, 00:17:06.969 "tls_version": 0, 00:17:06.969 "enable_ktls": false 00:17:06.969 } 00:17:06.969 }, 00:17:06.969 { 00:17:06.969 "method": "sock_impl_set_options", 00:17:06.969 "params": { 00:17:06.969 "impl_name": "uring", 00:17:06.969 "recv_buf_size": 2097152, 00:17:06.969 "send_buf_size": 2097152, 00:17:06.969 "enable_recv_pipe": true, 00:17:06.969 "enable_quickack": false, 00:17:06.969 "enable_placement_id": 0, 00:17:06.969 "enable_zerocopy_send_server": false, 00:17:06.969 "enable_zerocopy_send_client": false, 00:17:06.969 "zerocopy_threshold": 0, 00:17:06.969 "tls_version": 0, 00:17:06.969 "enable_ktls": false 00:17:06.969 } 00:17:06.969 } 00:17:06.969 ] 00:17:06.969 }, 00:17:06.969 { 00:17:06.969 "subsystem": "vmd", 00:17:06.969 "config": [] 00:17:06.969 }, 00:17:06.969 { 00:17:06.969 "subsystem": "accel", 00:17:06.969 "config": [ 00:17:06.969 { 00:17:06.969 "method": "accel_set_options", 00:17:06.969 "params": { 00:17:06.969 "small_cache_size": 128, 00:17:06.969 "large_cache_size": 16, 00:17:06.969 "task_count": 2048, 00:17:06.969 "sequence_count": 2048, 00:17:06.969 "buf_count": 2048 00:17:06.969 } 00:17:06.969 } 00:17:06.969 ] 00:17:06.969 }, 00:17:06.969 { 00:17:06.969 "subsystem": "bdev", 00:17:06.969 "config": [ 00:17:06.969 { 00:17:06.969 "method": "bdev_set_options", 00:17:06.969 "params": { 00:17:06.969 "bdev_io_pool_size": 65535, 00:17:06.969 "bdev_io_cache_size": 256, 00:17:06.969 "bdev_auto_examine": true, 00:17:06.969 "iobuf_small_cache_size": 128, 00:17:06.969 "iobuf_large_cache_size": 16 00:17:06.969 } 00:17:06.969 }, 00:17:06.969 { 00:17:06.969 "method": "bdev_raid_set_options", 00:17:06.969 "params": { 00:17:06.969 "process_window_size_kb": 1024, 00:17:06.969 "process_max_bandwidth_mb_sec": 0 00:17:06.969 } 00:17:06.969 }, 00:17:06.969 { 00:17:06.970 "method": "bdev_iscsi_set_options", 00:17:06.970 "params": { 00:17:06.970 "timeout_sec": 30 00:17:06.970 } 00:17:06.970 }, 00:17:06.970 { 00:17:06.970 "method": "bdev_nvme_set_options", 00:17:06.970 "params": { 00:17:06.970 "action_on_timeout": "none", 00:17:06.970 "timeout_us": 0, 00:17:06.970 "timeout_admin_us": 0, 00:17:06.970 "keep_alive_timeout_ms": 10000, 00:17:06.970 "arbitration_burst": 0, 00:17:06.970 "low_priority_weight": 0, 00:17:06.970 "medium_priority_weight": 0, 00:17:06.970 "high_priority_weight": 0, 00:17:06.970 "nvme_adminq_poll_period_us": 10000, 00:17:06.970 "nvme_ioq_poll_period_us": 0, 00:17:06.970 "io_queue_requests": 512, 00:17:06.970 "delay_cmd_submit": true, 00:17:06.970 "transport_retry_count": 4, 
00:17:06.970 "bdev_retry_count": 3, 00:17:06.970 "transport_ack_timeout": 0, 00:17:06.970 "ctrlr_loss_timeout_sec": 0, 00:17:06.970 "reconnect_delay_sec": 0, 00:17:06.970 "fast_io_fail_timeout_sec": 0, 00:17:06.970 "disable_auto_failback": false, 00:17:06.970 "generate_uuids": false, 00:17:06.970 "transport_tos": 0, 00:17:06.970 "nvme_error_stat": false, 00:17:06.970 "rdma_srq_size": 0, 00:17:06.970 "io_path_stat": false, 00:17:06.970 "allow_accel_sequence": false, 00:17:06.970 "rdma_max_cq_size": 0, 00:17:06.970 "rdma_cm_event_timeout_ms": 0, 00:17:06.970 "dhchap_digests": [ 00:17:06.970 "sha256", 00:17:06.970 "sha384", 00:17:06.970 "sha512" 00:17:06.970 ], 00:17:06.970 "dhchap_dhgroups": [ 00:17:06.970 "null", 00:17:06.970 "ffdhe2048", 00:17:06.970 "ffdhe3072", 00:17:06.970 "ffdhe4096", 00:17:06.970 "ffdhe6144", 00:17:06.970 "ffdhe8192" 00:17:06.970 ] 00:17:06.970 } 00:17:06.970 }, 00:17:06.970 { 00:17:06.970 "method": "bdev_nvme_attach_controller", 00:17:06.970 "params": { 00:17:06.970 "name": "TLSTEST", 00:17:06.970 "trtype": "TCP", 00:17:06.970 "adrfam": "IPv4", 00:17:06.970 "traddr": "10.0.0.3", 00:17:06.970 "trsvcid": "4420", 00:17:06.970 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.970 "prchk_reftag": false, 00:17:06.970 "prchk_guard": false, 00:17:06.970 "ctrlr_loss_timeout_sec": 0, 00:17:06.970 "reconnect_delay_sec": 0, 00:17:06.970 "fast_io_fail_timeout_sec": 0, 00:17:06.970 "psk": "key0", 00:17:06.970 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:06.970 "hdgst": false, 00:17:06.970 "ddgst": false, 00:17:06.970 "multipath": "multipath" 00:17:06.970 } 00:17:06.970 }, 00:17:06.970 { 00:17:06.970 "method": "bdev_nvme_set_hotplug", 00:17:06.970 "params": { 00:17:06.970 "period_us": 100000, 00:17:06.970 "enable": false 00:17:06.970 } 00:17:06.970 }, 00:17:06.970 { 00:17:06.970 "method": "bdev_wait_for_examine" 00:17:06.970 } 00:17:06.970 ] 00:17:06.970 }, 00:17:06.970 { 00:17:06.970 "subsystem": "nbd", 00:17:06.970 "config": [] 00:17:06.970 } 00:17:06.970 ] 00:17:06.970 }' 00:17:06.970 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 74752 00:17:06.970 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74752 ']' 00:17:06.970 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74752 00:17:06.970 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:06.970 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:06.970 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74752 00:17:06.970 killing process with pid 74752 00:17:06.970 Received shutdown signal, test time was about 10.000000 seconds 00:17:06.970 00:17:06.970 Latency(us) 00:17:06.970 [2024-11-19T00:01:13.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.970 [2024-11-19T00:01:13.662Z] =================================================================================================================== 00:17:06.970 [2024-11-19T00:01:13.662Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:06.970 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:06.970 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:06.970 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 74752' 00:17:06.970 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74752 00:17:06.970 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74752 00:17:07.908 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 74691 00:17:07.908 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74691 ']' 00:17:07.908 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74691 00:17:07.908 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:07.908 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:07.908 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74691 00:17:07.908 killing process with pid 74691 00:17:07.908 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:07.908 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:07.908 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74691' 00:17:07.908 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74691 00:17:07.908 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74691 00:17:09.285 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:09.285 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:09.285 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:09.285 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:17:09.286 "subsystems": [ 00:17:09.286 { 00:17:09.286 "subsystem": "keyring", 00:17:09.286 "config": [ 00:17:09.286 { 00:17:09.286 "method": "keyring_file_add_key", 00:17:09.286 "params": { 00:17:09.286 "name": "key0", 00:17:09.286 "path": "/tmp/tmp.NefXfLmuNW" 00:17:09.286 } 00:17:09.286 } 00:17:09.286 ] 00:17:09.286 }, 00:17:09.286 { 00:17:09.286 "subsystem": "iobuf", 00:17:09.286 "config": [ 00:17:09.286 { 00:17:09.286 "method": "iobuf_set_options", 00:17:09.286 "params": { 00:17:09.286 "small_pool_count": 8192, 00:17:09.286 "large_pool_count": 1024, 00:17:09.286 "small_bufsize": 8192, 00:17:09.286 "large_bufsize": 135168, 00:17:09.286 "enable_numa": false 00:17:09.286 } 00:17:09.286 } 00:17:09.286 ] 00:17:09.286 }, 00:17:09.286 { 00:17:09.286 "subsystem": "sock", 00:17:09.286 "config": [ 00:17:09.286 { 00:17:09.286 "method": "sock_set_default_impl", 00:17:09.286 "params": { 00:17:09.286 "impl_name": "uring" 00:17:09.286 } 00:17:09.286 }, 00:17:09.286 { 00:17:09.286 "method": "sock_impl_set_options", 00:17:09.286 "params": { 00:17:09.286 "impl_name": "ssl", 00:17:09.286 "recv_buf_size": 4096, 00:17:09.286 "send_buf_size": 4096, 00:17:09.286 "enable_recv_pipe": true, 00:17:09.286 "enable_quickack": false, 00:17:09.286 "enable_placement_id": 0, 00:17:09.286 "enable_zerocopy_send_server": true, 00:17:09.286 "enable_zerocopy_send_client": false, 00:17:09.286 "zerocopy_threshold": 0, 00:17:09.286 "tls_version": 0, 00:17:09.286 "enable_ktls": false 00:17:09.286 } 00:17:09.286 }, 00:17:09.286 { 00:17:09.286 "method": 
"sock_impl_set_options", 00:17:09.286 "params": { 00:17:09.286 "impl_name": "posix", 00:17:09.286 "recv_buf_size": 2097152, 00:17:09.286 "send_buf_size": 2097152, 00:17:09.286 "enable_recv_pipe": true, 00:17:09.286 "enable_quickack": false, 00:17:09.286 "enable_placement_id": 0, 00:17:09.286 "enable_zerocopy_send_server": true, 00:17:09.286 "enable_zerocopy_send_client": false, 00:17:09.286 "zerocopy_threshold": 0, 00:17:09.286 "tls_version": 0, 00:17:09.286 "enable_ktls": false 00:17:09.286 } 00:17:09.286 }, 00:17:09.286 { 00:17:09.286 "method": "sock_impl_set_options", 00:17:09.286 "params": { 00:17:09.286 "impl_name": "uring", 00:17:09.286 "recv_buf_size": 2097152, 00:17:09.286 "send_buf_size": 2097152, 00:17:09.286 "enable_recv_pipe": true, 00:17:09.286 "enable_quickack": false, 00:17:09.286 "enable_placement_id": 0, 00:17:09.286 "enable_zerocopy_send_server": false, 00:17:09.286 "enable_zerocopy_send_client": false, 00:17:09.286 "zerocopy_threshold": 0, 00:17:09.286 "tls_version": 0, 00:17:09.286 "enable_ktls": false 00:17:09.286 } 00:17:09.286 } 00:17:09.286 ] 00:17:09.286 }, 00:17:09.286 { 00:17:09.286 "subsystem": "vmd", 00:17:09.286 "config": [] 00:17:09.286 }, 00:17:09.286 { 00:17:09.286 "subsystem": "accel", 00:17:09.286 "config": [ 00:17:09.286 { 00:17:09.286 "method": "accel_set_options", 00:17:09.286 "params": { 00:17:09.286 "small_cache_size": 128, 00:17:09.286 "large_cache_size": 16, 00:17:09.286 "task_count": 2048, 00:17:09.286 "sequence_count": 2048, 00:17:09.286 "buf_count": 2048 00:17:09.286 } 00:17:09.286 } 00:17:09.286 ] 00:17:09.286 }, 00:17:09.286 { 00:17:09.286 "subsystem": "bdev", 00:17:09.286 "config": [ 00:17:09.286 { 00:17:09.286 "method": "bdev_set_options", 00:17:09.286 "params": { 00:17:09.286 "bdev_io_pool_size": 65535, 00:17:09.286 "bdev_io_cache_size": 256, 00:17:09.286 "bdev_auto_examine": true, 00:17:09.286 "iobuf_small_cache_size": 128, 00:17:09.286 "iobuf_large_cache_size": 16 00:17:09.286 } 00:17:09.286 }, 00:17:09.286 { 00:17:09.286 "method": "bdev_raid_set_options", 00:17:09.286 "params": { 00:17:09.286 "process_window_size_kb": 1024, 00:17:09.286 "process_max_bandwidth_mb_sec": 0 00:17:09.286 } 00:17:09.286 }, 00:17:09.286 { 00:17:09.286 "method": "bdev_iscsi_set_options", 00:17:09.286 "params": { 00:17:09.286 "timeout_sec": 30 00:17:09.286 } 00:17:09.286 }, 00:17:09.286 { 00:17:09.286 "method": "bdev_nvme_set_options", 00:17:09.286 "params": { 00:17:09.286 "action_on_timeout": "none", 00:17:09.286 "timeout_us": 0, 00:17:09.286 "timeout_admin_us": 0, 00:17:09.286 "keep_alive_timeout_ms": 10000, 00:17:09.286 "arbitration_burst": 0, 00:17:09.286 "low_priority_weight": 0, 00:17:09.286 "medium_priority_weight": 0, 00:17:09.286 "high_priority_weight": 0, 00:17:09.286 "nvme_adminq_poll_period_us": 10000, 00:17:09.286 "nvme_ioq_poll_period_us": 0, 00:17:09.286 "io_queue_requests": 0, 00:17:09.286 "delay_cmd_submit": true, 00:17:09.286 "transport_retry_count": 4, 00:17:09.286 "bdev_retry_count": 3, 00:17:09.286 "transport_ack_timeout": 0, 00:17:09.286 "ctrlr_loss_timeout_sec": 0, 00:17:09.286 "reconnect_delay_sec": 0, 00:17:09.286 "fast_io_fail_timeout_sec": 0, 00:17:09.286 "disable_auto_failback": false, 00:17:09.286 "generate_uuids": false, 00:17:09.286 "transport_tos": 0, 00:17:09.286 "nvme_error_stat": false, 00:17:09.286 "rdma_srq_size": 0, 00:17:09.286 "io_path_stat": false, 00:17:09.286 "allow_accel_sequence": false, 00:17:09.286 "rdma_max_cq_size": 0, 00:17:09.286 "rdma_cm_event_timeout_ms": 0, 00:17:09.286 "dhchap_digests": [ 00:17:09.286 
"sha256", 00:17:09.286 "sha384", 00:17:09.286 "sha512" 00:17:09.286 ], 00:17:09.286 "dhchap_dhgroups": [ 00:17:09.286 "null", 00:17:09.286 "ffdhe2048", 00:17:09.286 "ffdhe3072", 00:17:09.286 "ffdhe4096", 00:17:09.286 "ffdhe6144", 00:17:09.286 "ffdhe8192" 00:17:09.286 ] 00:17:09.286 } 00:17:09.286 }, 00:17:09.286 { 00:17:09.286 "method": "bdev_nvme_set_hotplug", 00:17:09.286 "params": { 00:17:09.286 "period_us": 100000, 00:17:09.286 "enable": false 00:17:09.286 } 00:17:09.286 }, 00:17:09.286 { 00:17:09.286 "method": "bdev_malloc_create", 00:17:09.286 "params": { 00:17:09.286 "name": "malloc0", 00:17:09.286 "num_blocks": 8192, 00:17:09.286 "block_size": 4096, 00:17:09.286 "physical_block_size": 4096, 00:17:09.286 "uuid": "ea1303a9-79f2-485b-9c63-98e9b0d4a518", 00:17:09.286 "optimal_io_boundary": 0, 00:17:09.286 "md_size": 0, 00:17:09.286 "dif_type": 0, 00:17:09.286 "dif_is_head_of_md": false, 00:17:09.286 "dif_pi_format": 0 00:17:09.287 } 00:17:09.287 }, 00:17:09.287 { 00:17:09.287 "method": "bdev_wait_for_examine" 00:17:09.287 } 00:17:09.287 ] 00:17:09.287 }, 00:17:09.287 { 00:17:09.287 "subsystem": "nbd", 00:17:09.287 "config": [] 00:17:09.287 }, 00:17:09.287 { 00:17:09.287 "subsystem": "scheduler", 00:17:09.287 "config": [ 00:17:09.287 { 00:17:09.287 "method": "framework_set_scheduler", 00:17:09.287 "params": { 00:17:09.287 "name": "static" 00:17:09.287 } 00:17:09.287 } 00:17:09.287 ] 00:17:09.287 }, 00:17:09.287 { 00:17:09.287 "subsystem": "nvmf", 00:17:09.287 "config": [ 00:17:09.287 { 00:17:09.287 "method": "nvmf_set_config", 00:17:09.287 "params": { 00:17:09.287 "discovery_filter": "match_any", 00:17:09.287 "admin_cmd_passthru": { 00:17:09.287 "identify_ctrlr": false 00:17:09.287 }, 00:17:09.287 "dhchap_digests": [ 00:17:09.287 "sha256", 00:17:09.287 "sha384", 00:17:09.287 "sha512" 00:17:09.287 ], 00:17:09.287 "dhchap_dhgroups": [ 00:17:09.287 "null", 00:17:09.287 "ffdhe2048", 00:17:09.287 "ffdhe3072", 00:17:09.287 "ffdhe4096", 00:17:09.287 "ffdhe6144", 00:17:09.287 "ffdhe8192" 00:17:09.287 ] 00:17:09.287 } 00:17:09.287 }, 00:17:09.287 { 00:17:09.287 "method": "nvmf_set_max_subsystems", 00:17:09.287 "params": { 00:17:09.287 "max_subsystems": 1024 00:17:09.287 } 00:17:09.287 }, 00:17:09.287 { 00:17:09.287 "method": "nvmf_set_crdt", 00:17:09.287 "params": { 00:17:09.287 "crdt1": 0, 00:17:09.287 "crdt2": 0, 00:17:09.287 "crdt3": 0 00:17:09.287 } 00:17:09.287 }, 00:17:09.287 { 00:17:09.287 "method": "nvmf_create_transport", 00:17:09.287 "params": { 00:17:09.287 "trtype": "TCP", 00:17:09.287 "max_queue_depth": 128, 00:17:09.287 "max_io_qpairs_per_ctrlr": 127, 00:17:09.287 "in_capsule_data_size": 4096, 00:17:09.287 "max_io_size": 131072, 00:17:09.287 "io_unit_size": 131072, 00:17:09.287 "max_aq_depth": 128, 00:17:09.287 "num_shared_buffers": 511, 00:17:09.287 "buf_cache_size": 4294967295, 00:17:09.287 "dif_insert_or_strip": false, 00:17:09.287 "zcopy": false, 00:17:09.287 "c2h_success": false, 00:17:09.287 "sock_priority": 0, 00:17:09.287 "abort_timeout_sec": 1, 00:17:09.287 "ack_timeout": 0, 00:17:09.287 "data_wr_pool_size": 0 00:17:09.287 } 00:17:09.287 }, 00:17:09.287 { 00:17:09.287 "method": "nvmf_create_subsystem", 00:17:09.287 "params": { 00:17:09.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:09.287 "allow_any_host": false, 00:17:09.287 "serial_number": "SPDK00000000000001", 00:17:09.287 "model_number": "SPDK bdev Controller", 00:17:09.287 "max_namespaces": 10, 00:17:09.287 "min_cntlid": 1, 00:17:09.287 "max_cntlid": 65519, 00:17:09.287 "ana_reporting": false 00:17:09.287 } 
00:17:09.287 }, 00:17:09.287 { 00:17:09.287 "method": "nvmf_subsystem_add_host", 00:17:09.287 "params": { 00:17:09.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:09.287 "host": "nqn.2016-06.io.spdk:host1", 00:17:09.287 "psk": "key0" 00:17:09.287 } 00:17:09.287 }, 00:17:09.287 { 00:17:09.287 "method": "nvmf_subsystem_add_ns", 00:17:09.287 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:09.287 "params": { 00:17:09.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:09.287 "namespace": { 00:17:09.287 "nsid": 1, 00:17:09.287 "bdev_name": "malloc0", 00:17:09.287 "nguid": "EA1303A979F2485B9C6398E9B0D4A518", 00:17:09.287 "uuid": "ea1303a9-79f2-485b-9c63-98e9b0d4a518", 00:17:09.287 "no_auto_visible": false 00:17:09.287 } 00:17:09.287 } 00:17:09.287 }, 00:17:09.287 { 00:17:09.287 "method": "nvmf_subsystem_add_listener", 00:17:09.287 "params": { 00:17:09.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:09.287 "listen_address": { 00:17:09.287 "trtype": "TCP", 00:17:09.287 "adrfam": "IPv4", 00:17:09.287 "traddr": "10.0.0.3", 00:17:09.287 "trsvcid": "4420" 00:17:09.287 }, 00:17:09.287 "secure_channel": true 00:17:09.287 } 00:17:09.287 } 00:17:09.287 ] 00:17:09.287 } 00:17:09.287 ] 00:17:09.287 }' 00:17:09.287 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=74820 00:17:09.287 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 74820 00:17:09.287 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:09.287 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74820 ']' 00:17:09.287 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.287 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.287 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.287 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.287 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:09.287 [2024-11-19 00:01:15.710361] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:09.287 [2024-11-19 00:01:15.710505] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.287 [2024-11-19 00:01:15.883225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.563 [2024-11-19 00:01:15.990544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.563 [2024-11-19 00:01:15.990954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:09.563 [2024-11-19 00:01:15.991009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.563 [2024-11-19 00:01:15.991053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.563 [2024-11-19 00:01:15.991080] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.563 [2024-11-19 00:01:15.992386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.834 [2024-11-19 00:01:16.296523] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:09.834 [2024-11-19 00:01:16.466766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.834 [2024-11-19 00:01:16.498694] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:09.834 [2024-11-19 00:01:16.498982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:10.093 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.093 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:10.093 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:10.093 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:10.093 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:10.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:10.093 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.093 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=74852 00:17:10.093 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 74852 /var/tmp/bdevperf.sock 00:17:10.093 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74852 ']' 00:17:10.093 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:10.093 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.093 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:10.093 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
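The -c /dev/fd/62 and -c /dev/fd/63 arguments above come from bash process substitution: tls.sh@205/@206 feed the tgtconf and bdevperfconf JSON captured earlier by save_config back into fresh processes, so the second target and bdevperf start preconfigured instead of being rebuilt RPC by RPC. A sketch of that replay, assuming the two shell variables hold the JSON dumps shown above and omitting the harness's backgrounding and waits:

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")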
00:17:10.093 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:17:10.093 "subsystems": [ 00:17:10.093 { 00:17:10.093 "subsystem": "keyring", 00:17:10.093 "config": [ 00:17:10.093 { 00:17:10.093 "method": "keyring_file_add_key", 00:17:10.093 "params": { 00:17:10.093 "name": "key0", 00:17:10.093 "path": "/tmp/tmp.NefXfLmuNW" 00:17:10.093 } 00:17:10.093 } 00:17:10.093 ] 00:17:10.093 }, 00:17:10.093 { 00:17:10.093 "subsystem": "iobuf", 00:17:10.093 "config": [ 00:17:10.093 { 00:17:10.093 "method": "iobuf_set_options", 00:17:10.093 "params": { 00:17:10.093 "small_pool_count": 8192, 00:17:10.093 "large_pool_count": 1024, 00:17:10.093 "small_bufsize": 8192, 00:17:10.093 "large_bufsize": 135168, 00:17:10.093 "enable_numa": false 00:17:10.093 } 00:17:10.093 } 00:17:10.093 ] 00:17:10.093 }, 00:17:10.093 { 00:17:10.093 "subsystem": "sock", 00:17:10.093 "config": [ 00:17:10.093 { 00:17:10.093 "method": "sock_set_default_impl", 00:17:10.093 "params": { 00:17:10.093 "impl_name": "uring" 00:17:10.093 } 00:17:10.093 }, 00:17:10.093 { 00:17:10.093 "method": "sock_impl_set_options", 00:17:10.093 "params": { 00:17:10.093 "impl_name": "ssl", 00:17:10.093 "recv_buf_size": 4096, 00:17:10.093 "send_buf_size": 4096, 00:17:10.093 "enable_recv_pipe": true, 00:17:10.093 "enable_quickack": false, 00:17:10.093 "enable_placement_id": 0, 00:17:10.093 "enable_zerocopy_send_server": true, 00:17:10.093 "enable_zerocopy_send_client": false, 00:17:10.093 "zerocopy_threshold": 0, 00:17:10.093 "tls_version": 0, 00:17:10.093 "enable_ktls": false 00:17:10.093 } 00:17:10.093 }, 00:17:10.093 { 00:17:10.093 "method": "sock_impl_set_options", 00:17:10.093 "params": { 00:17:10.093 "impl_name": "posix", 00:17:10.093 "recv_buf_size": 2097152, 00:17:10.093 "send_buf_size": 2097152, 00:17:10.093 "enable_recv_pipe": true, 00:17:10.093 "enable_quickack": false, 00:17:10.093 "enable_placement_id": 0, 00:17:10.093 "enable_zerocopy_send_server": true, 00:17:10.093 "enable_zerocopy_send_client": false, 00:17:10.093 "zerocopy_threshold": 0, 00:17:10.093 "tls_version": 0, 00:17:10.093 "enable_ktls": false 00:17:10.093 } 00:17:10.093 }, 00:17:10.093 { 00:17:10.093 "method": "sock_impl_set_options", 00:17:10.093 "params": { 00:17:10.093 "impl_name": "uring", 00:17:10.093 "recv_buf_size": 2097152, 00:17:10.093 "send_buf_size": 2097152, 00:17:10.093 "enable_recv_pipe": true, 00:17:10.093 "enable_quickack": false, 00:17:10.093 "enable_placement_id": 0, 00:17:10.093 "enable_zerocopy_send_server": false, 00:17:10.093 "enable_zerocopy_send_client": false, 00:17:10.093 "zerocopy_threshold": 0, 00:17:10.093 "tls_version": 0, 00:17:10.093 "enable_ktls": false 00:17:10.093 } 00:17:10.093 } 00:17:10.093 ] 00:17:10.093 }, 00:17:10.093 { 00:17:10.093 "subsystem": "vmd", 00:17:10.093 "config": [] 00:17:10.093 }, 00:17:10.093 { 00:17:10.093 "subsystem": "accel", 00:17:10.093 "config": [ 00:17:10.093 { 00:17:10.093 "method": "accel_set_options", 00:17:10.093 "params": { 00:17:10.093 "small_cache_size": 128, 00:17:10.093 "large_cache_size": 16, 00:17:10.093 "task_count": 2048, 00:17:10.093 "sequence_count": 2048, 00:17:10.094 "buf_count": 2048 00:17:10.094 } 00:17:10.094 } 00:17:10.094 ] 00:17:10.094 }, 00:17:10.094 { 00:17:10.094 "subsystem": "bdev", 00:17:10.094 "config": [ 00:17:10.094 { 00:17:10.094 "method": "bdev_set_options", 00:17:10.094 "params": { 00:17:10.094 "bdev_io_pool_size": 65535, 00:17:10.094 "bdev_io_cache_size": 256, 00:17:10.094 "bdev_auto_examine": true, 00:17:10.094 "iobuf_small_cache_size": 128, 00:17:10.094 
"iobuf_large_cache_size": 16 00:17:10.094 } 00:17:10.094 }, 00:17:10.094 { 00:17:10.094 "method": "bdev_raid_set_options", 00:17:10.094 "params": { 00:17:10.094 "process_window_size_kb": 1024, 00:17:10.094 "process_max_bandwidth_mb_sec": 0 00:17:10.094 } 00:17:10.094 }, 00:17:10.094 { 00:17:10.094 "method": "bdev_iscsi_set_options", 00:17:10.094 "params": { 00:17:10.094 "timeout_sec": 30 00:17:10.094 } 00:17:10.094 }, 00:17:10.094 { 00:17:10.094 "method": "bdev_nvme_set_options", 00:17:10.094 "params": { 00:17:10.094 "action_on_timeout": "none", 00:17:10.094 "timeout_us": 0, 00:17:10.094 "timeout_admin_us": 0, 00:17:10.094 "keep_alive_timeout_ms": 10000, 00:17:10.094 "arbitration_burst": 0, 00:17:10.094 "low_priority_weight": 0, 00:17:10.094 "medium_priority_weight": 0, 00:17:10.094 "high_priority_weight": 0, 00:17:10.094 "nvme_adminq_poll_period_us": 10000, 00:17:10.094 "nvme_ioq_poll_period_us": 0, 00:17:10.094 "io_queue_requests": 512, 00:17:10.094 "delay_cmd_submit": true, 00:17:10.094 "transport_retry_count": 4, 00:17:10.094 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.094 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:10.094 "bdev_retry_count": 3, 00:17:10.094 "transport_ack_timeout": 0, 00:17:10.094 "ctrlr_loss_timeout_sec": 0, 00:17:10.094 "reconnect_delay_sec": 0, 00:17:10.094 "fast_io_fail_timeout_sec": 0, 00:17:10.094 "disable_auto_failback": false, 00:17:10.094 "generate_uuids": false, 00:17:10.094 "transport_tos": 0, 00:17:10.094 "nvme_error_stat": false, 00:17:10.094 "rdma_srq_size": 0, 00:17:10.094 "io_path_stat": false, 00:17:10.094 "allow_accel_sequence": false, 00:17:10.094 "rdma_max_cq_size": 0, 00:17:10.094 "rdma_cm_event_timeout_ms": 0, 00:17:10.094 "dhchap_digests": [ 00:17:10.094 "sha256", 00:17:10.094 "sha384", 00:17:10.094 "sha512" 00:17:10.094 ], 00:17:10.094 "dhchap_dhgroups": [ 00:17:10.094 "null", 00:17:10.094 "ffdhe2048", 00:17:10.094 "ffdhe3072", 00:17:10.094 "ffdhe4096", 00:17:10.094 "ffdhe6144", 00:17:10.094 "ffdhe8192" 00:17:10.094 ] 00:17:10.094 } 00:17:10.094 }, 00:17:10.094 { 00:17:10.094 "method": "bdev_nvme_attach_controller", 00:17:10.094 "params": { 00:17:10.094 "name": "TLSTEST", 00:17:10.094 "trtype": "TCP", 00:17:10.094 "adrfam": "IPv4", 00:17:10.094 "traddr": "10.0.0.3", 00:17:10.094 "trsvcid": "4420", 00:17:10.094 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.094 "prchk_reftag": false, 00:17:10.094 "prchk_guard": false, 00:17:10.094 "ctrlr_loss_timeout_sec": 0, 00:17:10.094 "reconnect_delay_sec": 0, 00:17:10.094 "fast_io_fail_timeout_sec": 0, 00:17:10.094 "psk": "key0", 00:17:10.094 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.094 "hdgst": false, 00:17:10.094 "ddgst": false, 00:17:10.094 "multipath": "multipath" 00:17:10.094 } 00:17:10.094 }, 00:17:10.094 { 00:17:10.094 "method": "bdev_nvme_set_hotplug", 00:17:10.094 "params": { 00:17:10.094 "period_us": 100000, 00:17:10.094 "enable": false 00:17:10.094 } 00:17:10.094 }, 00:17:10.094 { 00:17:10.094 "method": "bdev_wait_for_examine" 00:17:10.094 } 00:17:10.094 ] 00:17:10.094 }, 00:17:10.094 { 00:17:10.094 "subsystem": "nbd", 00:17:10.094 "config": [] 00:17:10.094 } 00:17:10.094 ] 00:17:10.094 }' 00:17:10.353 [2024-11-19 00:01:16.839954] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:17:10.353 [2024-11-19 00:01:16.840333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74852 ] 00:17:10.353 [2024-11-19 00:01:17.024006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.612 [2024-11-19 00:01:17.149060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.870 [2024-11-19 00:01:17.424346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:10.870 [2024-11-19 00:01:17.537007] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:11.438 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.438 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:11.438 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:11.438 Running I/O for 10 seconds... 00:17:13.311 2944.00 IOPS, 11.50 MiB/s [2024-11-19T00:01:21.380Z] 2979.50 IOPS, 11.64 MiB/s [2024-11-19T00:01:22.316Z] 3002.67 IOPS, 11.73 MiB/s [2024-11-19T00:01:23.250Z] 3008.00 IOPS, 11.75 MiB/s [2024-11-19T00:01:24.186Z] 3020.80 IOPS, 11.80 MiB/s [2024-11-19T00:01:25.120Z] 3008.00 IOPS, 11.75 MiB/s [2024-11-19T00:01:26.055Z] 3017.14 IOPS, 11.79 MiB/s [2024-11-19T00:01:26.991Z] 3040.00 IOPS, 11.88 MiB/s [2024-11-19T00:01:28.368Z] 3024.33 IOPS, 11.81 MiB/s [2024-11-19T00:01:28.368Z] 2996.30 IOPS, 11.70 MiB/s 00:17:21.676 Latency(us) 00:17:21.676 [2024-11-19T00:01:28.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.676 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:21.676 Verification LBA range: start 0x0 length 0x2000 00:17:21.676 TLSTESTn1 : 10.03 3000.85 11.72 0.00 0.00 42555.95 2710.81 28835.84 00:17:21.676 [2024-11-19T00:01:28.368Z] =================================================================================================================== 00:17:21.676 [2024-11-19T00:01:28.368Z] Total : 3000.85 11.72 0.00 0.00 42555.95 2710.81 28835.84 00:17:21.676 { 00:17:21.676 "results": [ 00:17:21.676 { 00:17:21.676 "job": "TLSTESTn1", 00:17:21.676 "core_mask": "0x4", 00:17:21.676 "workload": "verify", 00:17:21.676 "status": "finished", 00:17:21.676 "verify_range": { 00:17:21.676 "start": 0, 00:17:21.676 "length": 8192 00:17:21.676 }, 00:17:21.676 "queue_depth": 128, 00:17:21.676 "io_size": 4096, 00:17:21.676 "runtime": 10.027162, 00:17:21.676 "iops": 3000.8490936917146, 00:17:21.676 "mibps": 11.72206677223326, 00:17:21.676 "io_failed": 0, 00:17:21.676 "io_timeout": 0, 00:17:21.676 "avg_latency_us": 42555.949060455, 00:17:21.676 "min_latency_us": 2710.807272727273, 00:17:21.676 "max_latency_us": 28835.84 00:17:21.676 } 00:17:21.676 ], 00:17:21.676 "core_count": 1 00:17:21.676 } 00:17:21.676 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:21.676 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 74852 00:17:21.676 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74852 ']' 00:17:21.676 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 74852 00:17:21.676 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:21.676 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.676 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74852 00:17:21.676 killing process with pid 74852 00:17:21.676 Received shutdown signal, test time was about 10.000000 seconds 00:17:21.676 00:17:21.676 Latency(us) 00:17:21.676 [2024-11-19T00:01:28.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.676 [2024-11-19T00:01:28.368Z] =================================================================================================================== 00:17:21.676 [2024-11-19T00:01:28.368Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:21.676 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:21.676 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:21.676 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74852' 00:17:21.676 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74852 00:17:21.676 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74852 00:17:22.613 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 74820 00:17:22.613 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74820 ']' 00:17:22.613 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74820 00:17:22.613 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:22.613 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.613 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74820 00:17:22.613 killing process with pid 74820 00:17:22.613 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:22.613 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:22.613 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74820' 00:17:22.613 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74820 00:17:22.613 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74820 00:17:23.549 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:17:23.549 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:23.549 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:23.549 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:23.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
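The killprocess helper exercised twice above (once for the bdevperf pid 74852, once for the nvmf target pid 74820) follows the same shape each time: confirm the pid is still alive, resolve its comm name for the "killing process with pid" message, then signal and reap it. A simplified sketch of that flow, reconstructed from the xtrace fragments rather than taken from the actual autotest_common.sh source:

  killprocess() {
      local pid=$1
      kill -0 "$pid"                      # fails fast if the process is already gone
      ps --no-headers -o comm= "$pid"     # name it for the log (reactor_2, reactor_1, ...)
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                         # reap and propagate the exit status
  }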
00:17:23.549 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75010 00:17:23.549 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:23.549 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75010 00:17:23.549 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75010 ']' 00:17:23.549 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.549 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.549 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.549 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.550 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:23.550 [2024-11-19 00:01:30.214220] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:23.550 [2024-11-19 00:01:30.214352] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.808 [2024-11-19 00:01:30.396250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.066 [2024-11-19 00:01:30.520812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.066 [2024-11-19 00:01:30.520884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.066 [2024-11-19 00:01:30.520910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.066 [2024-11-19 00:01:30.520940] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.066 [2024-11-19 00:01:30.520957] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
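The -e 0xFFFF on the nvmf_tgt command line is what produces the tracepoint notices just above: every trace group is enabled and backed by the shared-memory file /dev/shm/nvmf_trace.0 (the suffix matching -i 0). The notices themselves name the two ways to use it:

  spdk_trace -s nvmf -i 0        # attach to the live target and snapshot events
  cp /dev/shm/nvmf_trace.0 ~/    # or keep the shm file for offline analysis/debug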
00:17:24.066 [2024-11-19 00:01:30.522375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.066 [2024-11-19 00:01:30.702005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:24.633 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.633 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:24.633 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:24.633 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:24.633 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.633 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.633 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.NefXfLmuNW 00:17:24.633 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NefXfLmuNW 00:17:24.633 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:24.892 [2024-11-19 00:01:31.490898] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.892 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:25.151 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:25.411 [2024-11-19 00:01:32.015204] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:25.411 [2024-11-19 00:01:32.015590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:25.411 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:25.679 malloc0 00:17:25.679 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:25.952 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NefXfLmuNW 00:17:26.212 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:26.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
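setup_nvmf_tgt (target/tls.sh@50) reduces to the RPC sequence just traced. The -k flag on nvmf_subsystem_add_listener is what makes the 4420 listener TLS-capable, and --psk on nvmf_subsystem_add_host binds host1 to the registered key; the rest is the usual transport/subsystem/namespace plumbing. Collected in one place, with the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path abbreviated:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.NefXfLmuNW
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0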
00:17:26.471 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=75065 00:17:26.471 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:26.471 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:26.471 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 75065 /var/tmp/bdevperf.sock 00:17:26.471 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75065 ']' 00:17:26.471 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:26.471 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.471 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:26.471 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.471 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:26.730 [2024-11-19 00:01:33.235506] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:26.730 [2024-11-19 00:01:33.235921] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75065 ] 00:17:26.730 [2024-11-19 00:01:33.411869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.988 [2024-11-19 00:01:33.536557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.247 [2024-11-19 00:01:33.708039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:27.506 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.506 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:27.506 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NefXfLmuNW 00:17:27.765 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:28.025 [2024-11-19 00:01:34.661968] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:28.284 nvme0n1 00:17:28.284 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:28.284 Running I/O for 1 seconds... 
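This is bdevperf's wait-mode pattern, and it is the live-RPC equivalent of the JSON config the first run embedded: -z makes bdevperf initialize and then idle on its RPC socket so the device under test can be attached after startup, and perform_tests is what actually kicks off the -t 1 verify workload against it. The skeleton of the sequence traced above, with repo paths trimmed:

  bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NefXfLmuNW
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests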
00:17:29.482 3072.00 IOPS, 12.00 MiB/s 00:17:29.482 Latency(us) 00:17:29.482 [2024-11-19T00:01:36.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.482 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:29.482 Verification LBA range: start 0x0 length 0x2000 00:17:29.482 nvme0n1 : 1.04 3073.20 12.00 0.00 0.00 41064.15 8102.63 25856.93 00:17:29.482 [2024-11-19T00:01:36.174Z] =================================================================================================================== 00:17:29.482 [2024-11-19T00:01:36.174Z] Total : 3073.20 12.00 0.00 0.00 41064.15 8102.63 25856.93 00:17:29.482 { 00:17:29.482 "results": [ 00:17:29.482 { 00:17:29.482 "job": "nvme0n1", 00:17:29.482 "core_mask": "0x2", 00:17:29.482 "workload": "verify", 00:17:29.482 "status": "finished", 00:17:29.482 "verify_range": { 00:17:29.482 "start": 0, 00:17:29.482 "length": 8192 00:17:29.482 }, 00:17:29.482 "queue_depth": 128, 00:17:29.482 "io_size": 4096, 00:17:29.482 "runtime": 1.04126, 00:17:29.482 "iops": 3073.199777193016, 00:17:29.482 "mibps": 12.004686629660219, 00:17:29.482 "io_failed": 0, 00:17:29.482 "io_timeout": 0, 00:17:29.482 "avg_latency_us": 41064.15010909091, 00:17:29.482 "min_latency_us": 8102.632727272728, 00:17:29.482 "max_latency_us": 25856.93090909091 00:17:29.482 } 00:17:29.482 ], 00:17:29.482 "core_count": 1 00:17:29.482 } 00:17:29.482 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 75065 00:17:29.482 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75065 ']' 00:17:29.482 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75065 00:17:29.482 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:29.482 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.482 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75065 00:17:29.482 killing process with pid 75065 00:17:29.482 Received shutdown signal, test time was about 1.000000 seconds 00:17:29.482 00:17:29.482 Latency(us) 00:17:29.482 [2024-11-19T00:01:36.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.482 [2024-11-19T00:01:36.174Z] =================================================================================================================== 00:17:29.482 [2024-11-19T00:01:36.174Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:29.482 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:29.482 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:29.482 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75065' 00:17:29.482 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75065 00:17:29.482 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75065 00:17:30.420 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 75010 00:17:30.420 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75010 ']' 00:17:30.420 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75010 00:17:30.420 00:01:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:30.420 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.420 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75010 00:17:30.420 killing process with pid 75010 00:17:30.420 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:30.420 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:30.420 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75010' 00:17:30.420 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75010 00:17:30.420 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75010 00:17:31.358 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:17:31.358 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:31.358 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:31.358 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.358 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75135 00:17:31.358 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:31.358 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75135 00:17:31.358 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75135 ']' 00:17:31.358 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.358 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:31.358 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.358 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:31.358 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.358 [2024-11-19 00:01:37.874265] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:31.358 [2024-11-19 00:01:37.874715] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.617 [2024-11-19 00:01:38.048524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.617 [2024-11-19 00:01:38.137290] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.617 [2024-11-19 00:01:38.137394] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
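The per-run summaries are internally consistent and easy to spot-check: the reported "mibps" is exactly iops x io_size / 2^20 (that is, iops / 256 at the 4 KiB I/O size used here), and iops x runtime recovers a whole number of completed I/Os. With the figures from the 1-second nvme0n1 run above:

  awk 'BEGIN { print 3073.199777193016 / 256 }'       # 12.0047 -> the reported "mibps"
  awk 'BEGIN { print 3073.199777193016 * 1.04126 }'   # 3200 I/Os completed in 1.04126 s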
00:17:31.617 [2024-11-19 00:01:38.137414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.617 [2024-11-19 00:01:38.137450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.617 [2024-11-19 00:01:38.137464] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:31.617 [2024-11-19 00:01:38.138772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.617 [2024-11-19 00:01:38.302030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:32.186 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.186 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:32.186 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:32.186 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:32.186 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.445 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.445 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:17:32.445 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.445 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.445 [2024-11-19 00:01:38.888981] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.445 malloc0 00:17:32.445 [2024-11-19 00:01:38.938639] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:32.445 [2024-11-19 00:01:38.939018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:32.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:32.446 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.446 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=75167 00:17:32.446 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:32.446 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 75167 /var/tmp/bdevperf.sock 00:17:32.446 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75167 ']' 00:17:32.446 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:32.446 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.446 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:32.446 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.446 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.446 [2024-11-19 00:01:39.055026] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:32.446 [2024-11-19 00:01:39.055450] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75167 ] 00:17:32.704 [2024-11-19 00:01:39.230941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.704 [2024-11-19 00:01:39.358315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.962 [2024-11-19 00:01:39.529859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:33.531 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:33.531 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:33.531 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NefXfLmuNW 00:17:33.790 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:34.050 [2024-11-19 00:01:40.532048] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:34.050 nvme0n1 00:17:34.050 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:34.310 Running I/O for 1 seconds... 
00:17:35.248 2966.00 IOPS, 11.59 MiB/s 00:17:35.248 Latency(us) 00:17:35.248 [2024-11-19T00:01:41.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.248 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:35.248 Verification LBA range: start 0x0 length 0x2000 00:17:35.248 nvme0n1 : 1.03 3014.64 11.78 0.00 0.00 41772.65 3336.38 25856.93 00:17:35.248 [2024-11-19T00:01:41.940Z] =================================================================================================================== 00:17:35.248 [2024-11-19T00:01:41.940Z] Total : 3014.64 11.78 0.00 0.00 41772.65 3336.38 25856.93 00:17:35.248 { 00:17:35.248 "results": [ 00:17:35.248 { 00:17:35.248 "job": "nvme0n1", 00:17:35.248 "core_mask": "0x2", 00:17:35.248 "workload": "verify", 00:17:35.248 "status": "finished", 00:17:35.248 "verify_range": { 00:17:35.248 "start": 0, 00:17:35.248 "length": 8192 00:17:35.248 }, 00:17:35.248 "queue_depth": 128, 00:17:35.248 "io_size": 4096, 00:17:35.248 "runtime": 1.026655, 00:17:35.248 "iops": 3014.644646935923, 00:17:35.248 "mibps": 11.77595565209345, 00:17:35.248 "io_failed": 0, 00:17:35.248 "io_timeout": 0, 00:17:35.248 "avg_latency_us": 41772.645437509185, 00:17:35.248 "min_latency_us": 3336.378181818182, 00:17:35.248 "max_latency_us": 25856.93090909091 00:17:35.248 } 00:17:35.248 ], 00:17:35.248 "core_count": 1 00:17:35.248 } 00:17:35.248 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:17:35.248 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.248 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.508 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.508 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:17:35.508 "subsystems": [ 00:17:35.508 { 00:17:35.508 "subsystem": "keyring", 00:17:35.508 "config": [ 00:17:35.508 { 00:17:35.508 "method": "keyring_file_add_key", 00:17:35.508 "params": { 00:17:35.508 "name": "key0", 00:17:35.508 "path": "/tmp/tmp.NefXfLmuNW" 00:17:35.508 } 00:17:35.508 } 00:17:35.508 ] 00:17:35.508 }, 00:17:35.508 { 00:17:35.508 "subsystem": "iobuf", 00:17:35.508 "config": [ 00:17:35.508 { 00:17:35.508 "method": "iobuf_set_options", 00:17:35.508 "params": { 00:17:35.508 "small_pool_count": 8192, 00:17:35.508 "large_pool_count": 1024, 00:17:35.508 "small_bufsize": 8192, 00:17:35.508 "large_bufsize": 135168, 00:17:35.508 "enable_numa": false 00:17:35.508 } 00:17:35.508 } 00:17:35.508 ] 00:17:35.508 }, 00:17:35.508 { 00:17:35.508 "subsystem": "sock", 00:17:35.508 "config": [ 00:17:35.508 { 00:17:35.508 "method": "sock_set_default_impl", 00:17:35.508 "params": { 00:17:35.508 "impl_name": "uring" 00:17:35.508 } 00:17:35.508 }, 00:17:35.508 { 00:17:35.508 "method": "sock_impl_set_options", 00:17:35.508 "params": { 00:17:35.508 "impl_name": "ssl", 00:17:35.508 "recv_buf_size": 4096, 00:17:35.508 "send_buf_size": 4096, 00:17:35.508 "enable_recv_pipe": true, 00:17:35.508 "enable_quickack": false, 00:17:35.508 "enable_placement_id": 0, 00:17:35.508 "enable_zerocopy_send_server": true, 00:17:35.508 "enable_zerocopy_send_client": false, 00:17:35.508 "zerocopy_threshold": 0, 00:17:35.508 "tls_version": 0, 00:17:35.508 "enable_ktls": false 00:17:35.508 } 00:17:35.508 }, 00:17:35.508 { 00:17:35.508 "method": "sock_impl_set_options", 00:17:35.508 "params": { 00:17:35.508 "impl_name": "posix", 
00:17:35.508 "recv_buf_size": 2097152, 00:17:35.508 "send_buf_size": 2097152, 00:17:35.508 "enable_recv_pipe": true, 00:17:35.508 "enable_quickack": false, 00:17:35.508 "enable_placement_id": 0, 00:17:35.508 "enable_zerocopy_send_server": true, 00:17:35.508 "enable_zerocopy_send_client": false, 00:17:35.508 "zerocopy_threshold": 0, 00:17:35.508 "tls_version": 0, 00:17:35.508 "enable_ktls": false 00:17:35.508 } 00:17:35.508 }, 00:17:35.508 { 00:17:35.508 "method": "sock_impl_set_options", 00:17:35.508 "params": { 00:17:35.508 "impl_name": "uring", 00:17:35.508 "recv_buf_size": 2097152, 00:17:35.508 "send_buf_size": 2097152, 00:17:35.508 "enable_recv_pipe": true, 00:17:35.508 "enable_quickack": false, 00:17:35.508 "enable_placement_id": 0, 00:17:35.508 "enable_zerocopy_send_server": false, 00:17:35.508 "enable_zerocopy_send_client": false, 00:17:35.508 "zerocopy_threshold": 0, 00:17:35.508 "tls_version": 0, 00:17:35.508 "enable_ktls": false 00:17:35.508 } 00:17:35.508 } 00:17:35.508 ] 00:17:35.508 }, 00:17:35.508 { 00:17:35.508 "subsystem": "vmd", 00:17:35.508 "config": [] 00:17:35.508 }, 00:17:35.508 { 00:17:35.508 "subsystem": "accel", 00:17:35.508 "config": [ 00:17:35.508 { 00:17:35.508 "method": "accel_set_options", 00:17:35.508 "params": { 00:17:35.508 "small_cache_size": 128, 00:17:35.508 "large_cache_size": 16, 00:17:35.508 "task_count": 2048, 00:17:35.508 "sequence_count": 2048, 00:17:35.508 "buf_count": 2048 00:17:35.508 } 00:17:35.508 } 00:17:35.508 ] 00:17:35.508 }, 00:17:35.508 { 00:17:35.508 "subsystem": "bdev", 00:17:35.508 "config": [ 00:17:35.508 { 00:17:35.508 "method": "bdev_set_options", 00:17:35.508 "params": { 00:17:35.508 "bdev_io_pool_size": 65535, 00:17:35.508 "bdev_io_cache_size": 256, 00:17:35.508 "bdev_auto_examine": true, 00:17:35.508 "iobuf_small_cache_size": 128, 00:17:35.508 "iobuf_large_cache_size": 16 00:17:35.508 } 00:17:35.508 }, 00:17:35.508 { 00:17:35.508 "method": "bdev_raid_set_options", 00:17:35.508 "params": { 00:17:35.508 "process_window_size_kb": 1024, 00:17:35.508 "process_max_bandwidth_mb_sec": 0 00:17:35.508 } 00:17:35.508 }, 00:17:35.508 { 00:17:35.508 "method": "bdev_iscsi_set_options", 00:17:35.508 "params": { 00:17:35.508 "timeout_sec": 30 00:17:35.508 } 00:17:35.508 }, 00:17:35.508 { 00:17:35.508 "method": "bdev_nvme_set_options", 00:17:35.508 "params": { 00:17:35.508 "action_on_timeout": "none", 00:17:35.508 "timeout_us": 0, 00:17:35.508 "timeout_admin_us": 0, 00:17:35.508 "keep_alive_timeout_ms": 10000, 00:17:35.508 "arbitration_burst": 0, 00:17:35.508 "low_priority_weight": 0, 00:17:35.508 "medium_priority_weight": 0, 00:17:35.508 "high_priority_weight": 0, 00:17:35.508 "nvme_adminq_poll_period_us": 10000, 00:17:35.508 "nvme_ioq_poll_period_us": 0, 00:17:35.508 "io_queue_requests": 0, 00:17:35.508 "delay_cmd_submit": true, 00:17:35.508 "transport_retry_count": 4, 00:17:35.508 "bdev_retry_count": 3, 00:17:35.508 "transport_ack_timeout": 0, 00:17:35.508 "ctrlr_loss_timeout_sec": 0, 00:17:35.508 "reconnect_delay_sec": 0, 00:17:35.508 "fast_io_fail_timeout_sec": 0, 00:17:35.509 "disable_auto_failback": false, 00:17:35.509 "generate_uuids": false, 00:17:35.509 "transport_tos": 0, 00:17:35.509 "nvme_error_stat": false, 00:17:35.509 "rdma_srq_size": 0, 00:17:35.509 "io_path_stat": false, 00:17:35.509 "allow_accel_sequence": false, 00:17:35.509 "rdma_max_cq_size": 0, 00:17:35.509 "rdma_cm_event_timeout_ms": 0, 00:17:35.509 "dhchap_digests": [ 00:17:35.509 "sha256", 00:17:35.509 "sha384", 00:17:35.509 "sha512" 00:17:35.509 ], 00:17:35.509 
"dhchap_dhgroups": [ 00:17:35.509 "null", 00:17:35.509 "ffdhe2048", 00:17:35.509 "ffdhe3072", 00:17:35.509 "ffdhe4096", 00:17:35.509 "ffdhe6144", 00:17:35.509 "ffdhe8192" 00:17:35.509 ] 00:17:35.509 } 00:17:35.509 }, 00:17:35.509 { 00:17:35.509 "method": "bdev_nvme_set_hotplug", 00:17:35.509 "params": { 00:17:35.509 "period_us": 100000, 00:17:35.509 "enable": false 00:17:35.509 } 00:17:35.509 }, 00:17:35.509 { 00:17:35.509 "method": "bdev_malloc_create", 00:17:35.509 "params": { 00:17:35.509 "name": "malloc0", 00:17:35.509 "num_blocks": 8192, 00:17:35.509 "block_size": 4096, 00:17:35.509 "physical_block_size": 4096, 00:17:35.509 "uuid": "180a804e-f0c2-407d-af63-6d34523f9eb6", 00:17:35.509 "optimal_io_boundary": 0, 00:17:35.509 "md_size": 0, 00:17:35.509 "dif_type": 0, 00:17:35.509 "dif_is_head_of_md": false, 00:17:35.509 "dif_pi_format": 0 00:17:35.509 } 00:17:35.509 }, 00:17:35.509 { 00:17:35.509 "method": "bdev_wait_for_examine" 00:17:35.509 } 00:17:35.509 ] 00:17:35.509 }, 00:17:35.509 { 00:17:35.509 "subsystem": "nbd", 00:17:35.509 "config": [] 00:17:35.509 }, 00:17:35.509 { 00:17:35.509 "subsystem": "scheduler", 00:17:35.509 "config": [ 00:17:35.509 { 00:17:35.509 "method": "framework_set_scheduler", 00:17:35.509 "params": { 00:17:35.509 "name": "static" 00:17:35.509 } 00:17:35.509 } 00:17:35.509 ] 00:17:35.509 }, 00:17:35.509 { 00:17:35.509 "subsystem": "nvmf", 00:17:35.509 "config": [ 00:17:35.509 { 00:17:35.509 "method": "nvmf_set_config", 00:17:35.509 "params": { 00:17:35.509 "discovery_filter": "match_any", 00:17:35.509 "admin_cmd_passthru": { 00:17:35.509 "identify_ctrlr": false 00:17:35.509 }, 00:17:35.509 "dhchap_digests": [ 00:17:35.509 "sha256", 00:17:35.509 "sha384", 00:17:35.509 "sha512" 00:17:35.509 ], 00:17:35.509 "dhchap_dhgroups": [ 00:17:35.509 "null", 00:17:35.509 "ffdhe2048", 00:17:35.509 "ffdhe3072", 00:17:35.509 "ffdhe4096", 00:17:35.509 "ffdhe6144", 00:17:35.509 "ffdhe8192" 00:17:35.509 ] 00:17:35.509 } 00:17:35.509 }, 00:17:35.509 { 00:17:35.509 "method": "nvmf_set_max_subsystems", 00:17:35.509 "params": { 00:17:35.509 "max_subsystems": 1024 00:17:35.509 } 00:17:35.509 }, 00:17:35.509 { 00:17:35.509 "method": "nvmf_set_crdt", 00:17:35.509 "params": { 00:17:35.509 "crdt1": 0, 00:17:35.509 "crdt2": 0, 00:17:35.509 "crdt3": 0 00:17:35.509 } 00:17:35.509 }, 00:17:35.509 { 00:17:35.509 "method": "nvmf_create_transport", 00:17:35.509 "params": { 00:17:35.509 "trtype": "TCP", 00:17:35.509 "max_queue_depth": 128, 00:17:35.509 "max_io_qpairs_per_ctrlr": 127, 00:17:35.509 "in_capsule_data_size": 4096, 00:17:35.509 "max_io_size": 131072, 00:17:35.509 "io_unit_size": 131072, 00:17:35.509 "max_aq_depth": 128, 00:17:35.509 "num_shared_buffers": 511, 00:17:35.509 "buf_cache_size": 4294967295, 00:17:35.509 "dif_insert_or_strip": false, 00:17:35.509 "zcopy": false, 00:17:35.509 "c2h_success": false, 00:17:35.509 "sock_priority": 0, 00:17:35.509 "abort_timeout_sec": 1, 00:17:35.509 "ack_timeout": 0, 00:17:35.509 "data_wr_pool_size": 0 00:17:35.509 } 00:17:35.509 }, 00:17:35.509 { 00:17:35.509 "method": "nvmf_create_subsystem", 00:17:35.509 "params": { 00:17:35.509 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.509 "allow_any_host": false, 00:17:35.509 "serial_number": "00000000000000000000", 00:17:35.509 "model_number": "SPDK bdev Controller", 00:17:35.509 "max_namespaces": 32, 00:17:35.509 "min_cntlid": 1, 00:17:35.509 "max_cntlid": 65519, 00:17:35.509 "ana_reporting": false 00:17:35.509 } 00:17:35.509 }, 00:17:35.509 { 00:17:35.509 "method": "nvmf_subsystem_add_host", 
00:17:35.509 "params": { 00:17:35.509 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.509 "host": "nqn.2016-06.io.spdk:host1", 00:17:35.509 "psk": "key0" 00:17:35.509 } 00:17:35.509 }, 00:17:35.509 { 00:17:35.509 "method": "nvmf_subsystem_add_ns", 00:17:35.509 "params": { 00:17:35.509 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.509 "namespace": { 00:17:35.509 "nsid": 1, 00:17:35.509 "bdev_name": "malloc0", 00:17:35.509 "nguid": "180A804EF0C2407DAF636D34523F9EB6", 00:17:35.509 "uuid": "180a804e-f0c2-407d-af63-6d34523f9eb6", 00:17:35.509 "no_auto_visible": false 00:17:35.509 } 00:17:35.509 } 00:17:35.509 }, 00:17:35.509 { 00:17:35.509 "method": "nvmf_subsystem_add_listener", 00:17:35.509 "params": { 00:17:35.509 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.509 "listen_address": { 00:17:35.509 "trtype": "TCP", 00:17:35.509 "adrfam": "IPv4", 00:17:35.509 "traddr": "10.0.0.3", 00:17:35.509 "trsvcid": "4420" 00:17:35.509 }, 00:17:35.509 "secure_channel": false, 00:17:35.509 "sock_impl": "ssl" 00:17:35.509 } 00:17:35.509 } 00:17:35.509 ] 00:17:35.509 } 00:17:35.509 ] 00:17:35.509 }' 00:17:35.509 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:35.769 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:17:35.769 "subsystems": [ 00:17:35.769 { 00:17:35.769 "subsystem": "keyring", 00:17:35.769 "config": [ 00:17:35.769 { 00:17:35.769 "method": "keyring_file_add_key", 00:17:35.769 "params": { 00:17:35.769 "name": "key0", 00:17:35.769 "path": "/tmp/tmp.NefXfLmuNW" 00:17:35.769 } 00:17:35.769 } 00:17:35.769 ] 00:17:35.769 }, 00:17:35.769 { 00:17:35.769 "subsystem": "iobuf", 00:17:35.769 "config": [ 00:17:35.769 { 00:17:35.769 "method": "iobuf_set_options", 00:17:35.769 "params": { 00:17:35.769 "small_pool_count": 8192, 00:17:35.769 "large_pool_count": 1024, 00:17:35.769 "small_bufsize": 8192, 00:17:35.769 "large_bufsize": 135168, 00:17:35.769 "enable_numa": false 00:17:35.769 } 00:17:35.769 } 00:17:35.769 ] 00:17:35.769 }, 00:17:35.769 { 00:17:35.769 "subsystem": "sock", 00:17:35.769 "config": [ 00:17:35.769 { 00:17:35.769 "method": "sock_set_default_impl", 00:17:35.769 "params": { 00:17:35.769 "impl_name": "uring" 00:17:35.769 } 00:17:35.769 }, 00:17:35.770 { 00:17:35.770 "method": "sock_impl_set_options", 00:17:35.770 "params": { 00:17:35.770 "impl_name": "ssl", 00:17:35.770 "recv_buf_size": 4096, 00:17:35.770 "send_buf_size": 4096, 00:17:35.770 "enable_recv_pipe": true, 00:17:35.770 "enable_quickack": false, 00:17:35.770 "enable_placement_id": 0, 00:17:35.770 "enable_zerocopy_send_server": true, 00:17:35.770 "enable_zerocopy_send_client": false, 00:17:35.770 "zerocopy_threshold": 0, 00:17:35.770 "tls_version": 0, 00:17:35.770 "enable_ktls": false 00:17:35.770 } 00:17:35.770 }, 00:17:35.770 { 00:17:35.770 "method": "sock_impl_set_options", 00:17:35.770 "params": { 00:17:35.770 "impl_name": "posix", 00:17:35.770 "recv_buf_size": 2097152, 00:17:35.770 "send_buf_size": 2097152, 00:17:35.770 "enable_recv_pipe": true, 00:17:35.770 "enable_quickack": false, 00:17:35.770 "enable_placement_id": 0, 00:17:35.770 "enable_zerocopy_send_server": true, 00:17:35.770 "enable_zerocopy_send_client": false, 00:17:35.770 "zerocopy_threshold": 0, 00:17:35.770 "tls_version": 0, 00:17:35.770 "enable_ktls": false 00:17:35.770 } 00:17:35.770 }, 00:17:35.770 { 00:17:35.770 "method": "sock_impl_set_options", 00:17:35.770 "params": { 00:17:35.770 "impl_name": "uring", 00:17:35.770 
"recv_buf_size": 2097152, 00:17:35.770 "send_buf_size": 2097152, 00:17:35.770 "enable_recv_pipe": true, 00:17:35.770 "enable_quickack": false, 00:17:35.770 "enable_placement_id": 0, 00:17:35.770 "enable_zerocopy_send_server": false, 00:17:35.770 "enable_zerocopy_send_client": false, 00:17:35.770 "zerocopy_threshold": 0, 00:17:35.770 "tls_version": 0, 00:17:35.770 "enable_ktls": false 00:17:35.770 } 00:17:35.770 } 00:17:35.770 ] 00:17:35.770 }, 00:17:35.770 { 00:17:35.770 "subsystem": "vmd", 00:17:35.770 "config": [] 00:17:35.770 }, 00:17:35.770 { 00:17:35.770 "subsystem": "accel", 00:17:35.770 "config": [ 00:17:35.770 { 00:17:35.770 "method": "accel_set_options", 00:17:35.770 "params": { 00:17:35.770 "small_cache_size": 128, 00:17:35.770 "large_cache_size": 16, 00:17:35.770 "task_count": 2048, 00:17:35.770 "sequence_count": 2048, 00:17:35.770 "buf_count": 2048 00:17:35.770 } 00:17:35.770 } 00:17:35.770 ] 00:17:35.770 }, 00:17:35.770 { 00:17:35.770 "subsystem": "bdev", 00:17:35.770 "config": [ 00:17:35.770 { 00:17:35.770 "method": "bdev_set_options", 00:17:35.770 "params": { 00:17:35.770 "bdev_io_pool_size": 65535, 00:17:35.770 "bdev_io_cache_size": 256, 00:17:35.770 "bdev_auto_examine": true, 00:17:35.770 "iobuf_small_cache_size": 128, 00:17:35.770 "iobuf_large_cache_size": 16 00:17:35.770 } 00:17:35.770 }, 00:17:35.770 { 00:17:35.770 "method": "bdev_raid_set_options", 00:17:35.770 "params": { 00:17:35.770 "process_window_size_kb": 1024, 00:17:35.770 "process_max_bandwidth_mb_sec": 0 00:17:35.770 } 00:17:35.770 }, 00:17:35.770 { 00:17:35.770 "method": "bdev_iscsi_set_options", 00:17:35.770 "params": { 00:17:35.770 "timeout_sec": 30 00:17:35.770 } 00:17:35.770 }, 00:17:35.770 { 00:17:35.770 "method": "bdev_nvme_set_options", 00:17:35.770 "params": { 00:17:35.770 "action_on_timeout": "none", 00:17:35.770 "timeout_us": 0, 00:17:35.770 "timeout_admin_us": 0, 00:17:35.770 "keep_alive_timeout_ms": 10000, 00:17:35.770 "arbitration_burst": 0, 00:17:35.770 "low_priority_weight": 0, 00:17:35.770 "medium_priority_weight": 0, 00:17:35.770 "high_priority_weight": 0, 00:17:35.770 "nvme_adminq_poll_period_us": 10000, 00:17:35.770 "nvme_ioq_poll_period_us": 0, 00:17:35.770 "io_queue_requests": 512, 00:17:35.770 "delay_cmd_submit": true, 00:17:35.770 "transport_retry_count": 4, 00:17:35.770 "bdev_retry_count": 3, 00:17:35.770 "transport_ack_timeout": 0, 00:17:35.770 "ctrlr_loss_timeout_sec": 0, 00:17:35.770 "reconnect_delay_sec": 0, 00:17:35.770 "fast_io_fail_timeout_sec": 0, 00:17:35.770 "disable_auto_failback": false, 00:17:35.770 "generate_uuids": false, 00:17:35.770 "transport_tos": 0, 00:17:35.770 "nvme_error_stat": false, 00:17:35.770 "rdma_srq_size": 0, 00:17:35.770 "io_path_stat": false, 00:17:35.770 "allow_accel_sequence": false, 00:17:35.770 "rdma_max_cq_size": 0, 00:17:35.770 "rdma_cm_event_timeout_ms": 0, 00:17:35.770 "dhchap_digests": [ 00:17:35.770 "sha256", 00:17:35.770 "sha384", 00:17:35.770 "sha512" 00:17:35.770 ], 00:17:35.770 "dhchap_dhgroups": [ 00:17:35.770 "null", 00:17:35.770 "ffdhe2048", 00:17:35.770 "ffdhe3072", 00:17:35.770 "ffdhe4096", 00:17:35.770 "ffdhe6144", 00:17:35.770 "ffdhe8192" 00:17:35.770 ] 00:17:35.770 } 00:17:35.770 }, 00:17:35.770 { 00:17:35.770 "method": "bdev_nvme_attach_controller", 00:17:35.770 "params": { 00:17:35.770 "name": "nvme0", 00:17:35.770 "trtype": "TCP", 00:17:35.770 "adrfam": "IPv4", 00:17:35.770 "traddr": "10.0.0.3", 00:17:35.770 "trsvcid": "4420", 00:17:35.770 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.770 "prchk_reftag": false, 00:17:35.770 
"prchk_guard": false, 00:17:35.770 "ctrlr_loss_timeout_sec": 0, 00:17:35.770 "reconnect_delay_sec": 0, 00:17:35.770 "fast_io_fail_timeout_sec": 0, 00:17:35.770 "psk": "key0", 00:17:35.770 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.770 "hdgst": false, 00:17:35.770 "ddgst": false, 00:17:35.770 "multipath": "multipath" 00:17:35.770 } 00:17:35.770 }, 00:17:35.770 { 00:17:35.771 "method": "bdev_nvme_set_hotplug", 00:17:35.771 "params": { 00:17:35.771 "period_us": 100000, 00:17:35.771 "enable": false 00:17:35.771 } 00:17:35.771 }, 00:17:35.771 { 00:17:35.771 "method": "bdev_enable_histogram", 00:17:35.771 "params": { 00:17:35.771 "name": "nvme0n1", 00:17:35.771 "enable": true 00:17:35.771 } 00:17:35.771 }, 00:17:35.771 { 00:17:35.771 "method": "bdev_wait_for_examine" 00:17:35.771 } 00:17:35.771 ] 00:17:35.771 }, 00:17:35.771 { 00:17:35.771 "subsystem": "nbd", 00:17:35.771 "config": [] 00:17:35.771 } 00:17:35.771 ] 00:17:35.771 }' 00:17:35.771 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 75167 00:17:35.771 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75167 ']' 00:17:35.771 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75167 00:17:35.771 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:35.771 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.771 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75167 00:17:35.771 killing process with pid 75167 00:17:35.771 Received shutdown signal, test time was about 1.000000 seconds 00:17:35.771 00:17:35.771 Latency(us) 00:17:35.771 [2024-11-19T00:01:42.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.771 [2024-11-19T00:01:42.463Z] =================================================================================================================== 00:17:35.771 [2024-11-19T00:01:42.463Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:35.771 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:35.771 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:35.771 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75167' 00:17:35.771 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75167 00:17:35.771 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75167 00:17:36.708 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 75135 00:17:36.708 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75135 ']' 00:17:36.708 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75135 00:17:36.708 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:36.708 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.709 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75135 00:17:36.709 killing process with pid 75135 00:17:36.709 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:17:36.709 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:36.709 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75135' 00:17:36.709 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75135 00:17:36.709 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75135 00:17:37.646 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:17:37.646 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:37.646 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:37.646 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:17:37.646 "subsystems": [ 00:17:37.646 { 00:17:37.646 "subsystem": "keyring", 00:17:37.646 "config": [ 00:17:37.646 { 00:17:37.646 "method": "keyring_file_add_key", 00:17:37.646 "params": { 00:17:37.646 "name": "key0", 00:17:37.646 "path": "/tmp/tmp.NefXfLmuNW" 00:17:37.646 } 00:17:37.646 } 00:17:37.646 ] 00:17:37.646 }, 00:17:37.646 { 00:17:37.646 "subsystem": "iobuf", 00:17:37.646 "config": [ 00:17:37.646 { 00:17:37.646 "method": "iobuf_set_options", 00:17:37.646 "params": { 00:17:37.646 "small_pool_count": 8192, 00:17:37.646 "large_pool_count": 1024, 00:17:37.646 "small_bufsize": 8192, 00:17:37.646 "large_bufsize": 135168, 00:17:37.646 "enable_numa": false 00:17:37.646 } 00:17:37.646 } 00:17:37.646 ] 00:17:37.646 }, 00:17:37.646 { 00:17:37.646 "subsystem": "sock", 00:17:37.646 "config": [ 00:17:37.646 { 00:17:37.646 "method": "sock_set_default_impl", 00:17:37.646 "params": { 00:17:37.646 "impl_name": "uring" 00:17:37.646 } 00:17:37.646 }, 00:17:37.646 { 00:17:37.646 "method": "sock_impl_set_options", 00:17:37.646 "params": { 00:17:37.646 "impl_name": "ssl", 00:17:37.646 "recv_buf_size": 4096, 00:17:37.646 "send_buf_size": 4096, 00:17:37.646 "enable_recv_pipe": true, 00:17:37.646 "enable_quickack": false, 00:17:37.646 "enable_placement_id": 0, 00:17:37.646 "enable_zerocopy_send_server": true, 00:17:37.646 "enable_zerocopy_send_client": false, 00:17:37.646 "zerocopy_threshold": 0, 00:17:37.646 "tls_version": 0, 00:17:37.646 "enable_ktls": false 00:17:37.646 } 00:17:37.646 }, 00:17:37.646 { 00:17:37.646 "method": "sock_impl_set_options", 00:17:37.646 "params": { 00:17:37.646 "impl_name": "posix", 00:17:37.646 "recv_buf_size": 2097152, 00:17:37.646 "send_buf_size": 2097152, 00:17:37.646 "enable_recv_pipe": true, 00:17:37.646 "enable_quickack": false, 00:17:37.646 "enable_placement_id": 0, 00:17:37.646 "enable_zerocopy_send_server": true, 00:17:37.646 "enable_zerocopy_send_client": false, 00:17:37.646 "zerocopy_threshold": 0, 00:17:37.646 "tls_version": 0, 00:17:37.646 "enable_ktls": false 00:17:37.646 } 00:17:37.646 }, 00:17:37.646 { 00:17:37.646 "method": "sock_impl_set_options", 00:17:37.646 "params": { 00:17:37.646 "impl_name": "uring", 00:17:37.646 "recv_buf_size": 2097152, 00:17:37.646 "send_buf_size": 2097152, 00:17:37.646 "enable_recv_pipe": true, 00:17:37.646 "enable_quickack": false, 00:17:37.646 "enable_placement_id": 0, 00:17:37.646 "enable_zerocopy_send_server": false, 00:17:37.646 "enable_zerocopy_send_client": false, 00:17:37.646 "zerocopy_threshold": 0, 00:17:37.646 "tls_version": 0, 00:17:37.646 "enable_ktls": false 00:17:37.646 } 00:17:37.646 } 00:17:37.646 ] 00:17:37.646 }, 00:17:37.646 { 
00:17:37.646 "subsystem": "vmd", 00:17:37.646 "config": [] 00:17:37.646 }, 00:17:37.646 { 00:17:37.646 "subsystem": "accel", 00:17:37.646 "config": [ 00:17:37.646 { 00:17:37.646 "method": "accel_set_options", 00:17:37.646 "params": { 00:17:37.646 "small_cache_size": 128, 00:17:37.646 "large_cache_size": 16, 00:17:37.646 "task_count": 2048, 00:17:37.646 "sequence_count": 2048, 00:17:37.646 "buf_count": 2048 00:17:37.646 } 00:17:37.646 } 00:17:37.646 ] 00:17:37.646 }, 00:17:37.646 { 00:17:37.646 "subsystem": "bdev", 00:17:37.646 "config": [ 00:17:37.646 { 00:17:37.646 "method": "bdev_set_options", 00:17:37.646 "params": { 00:17:37.646 "bdev_io_pool_size": 65535, 00:17:37.646 "bdev_io_cache_size": 256, 00:17:37.646 "bdev_auto_examine": true, 00:17:37.646 "iobuf_small_cache_size": 128, 00:17:37.646 "iobuf_large_cache_size": 16 00:17:37.646 } 00:17:37.646 }, 00:17:37.646 { 00:17:37.646 "method": "bdev_raid_set_options", 00:17:37.646 "params": { 00:17:37.646 "process_window_size_kb": 1024, 00:17:37.646 "process_max_bandwidth_mb_sec": 0 00:17:37.646 } 00:17:37.646 }, 00:17:37.646 { 00:17:37.646 "method": "bdev_iscsi_set_options", 00:17:37.646 "params": { 00:17:37.646 "timeout_sec": 30 00:17:37.646 } 00:17:37.646 }, 00:17:37.646 { 00:17:37.646 "method": "bdev_nvme_set_options", 00:17:37.646 "params": { 00:17:37.646 "action_on_timeout": "none", 00:17:37.646 "timeout_us": 0, 00:17:37.646 "timeout_admin_us": 0, 00:17:37.646 "keep_alive_timeout_ms": 10000, 00:17:37.646 "arbitration_burst": 0, 00:17:37.646 "low_priority_weight": 0, 00:17:37.646 "medium_priority_weight": 0, 00:17:37.646 "high_priority_weight": 0, 00:17:37.646 "nvme_adminq_poll_period_us": 10000, 00:17:37.646 "nvme_ioq_poll_period_us": 0, 00:17:37.646 "io_queue_requests": 0, 00:17:37.646 "delay_cmd_submit": true, 00:17:37.646 "transport_retry_count": 4, 00:17:37.646 "bdev_retry_count": 3, 00:17:37.646 "transport_ack_timeout": 0, 00:17:37.646 "ctrlr_loss_timeout_sec": 0, 00:17:37.646 "reconnect_delay_sec": 0, 00:17:37.646 "fast_io_fail_timeout_sec": 0, 00:17:37.646 "disable_auto_failback": false, 00:17:37.646 "generate_uuids": false, 00:17:37.646 "transport_tos": 0, 00:17:37.646 "nvme_error_stat": false, 00:17:37.646 "rdma_srq_size": 0, 00:17:37.646 "io_path_stat": false, 00:17:37.646 "allow_accel_sequence": false, 00:17:37.646 "rdma_max_cq_size": 0, 00:17:37.646 "rdma_cm_event_timeout_ms": 0, 00:17:37.646 "dhchap_digests": [ 00:17:37.646 "sha256", 00:17:37.646 "sha384", 00:17:37.646 "sha512" 00:17:37.646 ], 00:17:37.646 "dhchap_dhgroups": [ 00:17:37.646 "null", 00:17:37.646 "ffdhe2048", 00:17:37.646 "ffdhe3072", 00:17:37.646 "ffdhe4096", 00:17:37.646 "ffdhe6144", 00:17:37.646 "ffdhe8192" 00:17:37.646 ] 00:17:37.646 } 00:17:37.646 }, 00:17:37.646 { 00:17:37.646 "method": "bdev_nvme_set_hotplug", 00:17:37.646 "params": { 00:17:37.646 "period_us": 100000, 00:17:37.646 "enable": false 00:17:37.646 } 00:17:37.646 }, 00:17:37.646 { 00:17:37.646 "method": "bdev_malloc_create", 00:17:37.646 "params": { 00:17:37.646 "name": "malloc0", 00:17:37.646 "num_blocks": 8192, 00:17:37.646 "block_size": 4096, 00:17:37.646 "physical_block_size": 4096, 00:17:37.646 "uuid": "180a804e-f0c2-407d-af63-6d34523f9eb6", 00:17:37.646 "optimal_io_boundary": 0, 00:17:37.646 "md_size": 0, 00:17:37.646 "dif_type": 0, 00:17:37.646 "dif_is_head_of_md": false, 00:17:37.646 "dif_pi_format": 0 00:17:37.646 } 00:17:37.646 }, 00:17:37.646 { 00:17:37.646 "method": "bdev_wait_for_examine" 00:17:37.646 } 00:17:37.646 ] 00:17:37.646 }, 00:17:37.646 { 00:17:37.646 "subsystem": 
"nbd", 00:17:37.646 "config": [] 00:17:37.646 }, 00:17:37.646 { 00:17:37.646 "subsystem": "scheduler", 00:17:37.646 "config": [ 00:17:37.646 { 00:17:37.646 "method": "framework_set_scheduler", 00:17:37.646 "params": { 00:17:37.646 "name": "static" 00:17:37.646 } 00:17:37.646 } 00:17:37.646 ] 00:17:37.646 }, 00:17:37.646 { 00:17:37.646 "subsystem": "nvmf", 00:17:37.646 "config": [ 00:17:37.646 { 00:17:37.646 "method": "nvmf_set_config", 00:17:37.646 "params": { 00:17:37.646 "discovery_filter": "match_any", 00:17:37.646 "admin_cmd_passthru": { 00:17:37.646 "identify_ctrlr": false 00:17:37.646 }, 00:17:37.646 "dhchap_digests": [ 00:17:37.646 "sha256", 00:17:37.646 "sha384", 00:17:37.646 "sha512" 00:17:37.646 ], 00:17:37.646 "dhchap_dhgroups": [ 00:17:37.646 "null", 00:17:37.646 "ffdhe2048", 00:17:37.646 "ffdhe3072", 00:17:37.646 "ffdhe4096", 00:17:37.646 "ffdhe6144", 00:17:37.646 "ffdhe8192" 00:17:37.646 ] 00:17:37.646 } 00:17:37.646 }, 00:17:37.646 { 00:17:37.646 "method": "nvmf_set_max_subsystems", 00:17:37.646 "params": { 00:17:37.646 "max_subsystems": 1024 00:17:37.646 } 00:17:37.646 }, 00:17:37.646 { 00:17:37.646 "method": "nvmf_set_crdt", 00:17:37.646 "params": { 00:17:37.646 "crdt1": 0, 00:17:37.646 "crdt2": 0, 00:17:37.646 "crdt3": 0 00:17:37.646 } 00:17:37.646 }, 00:17:37.646 { 00:17:37.646 "method": "nvmf_create_transport", 00:17:37.646 "params": { 00:17:37.647 "trtype": "TCP", 00:17:37.647 "max_queue_depth": 128, 00:17:37.647 "max_io_qpairs_per_ctrlr": 127, 00:17:37.647 "in_capsule_data_size": 4096, 00:17:37.647 "max_io_size": 131072, 00:17:37.647 "io_unit_size": 131072, 00:17:37.647 "max_aq_depth": 128, 00:17:37.647 "num_shared_buffers": 511, 00:17:37.647 "buf_cache_size": 4294967295, 00:17:37.647 "dif_insert_or_strip": false, 00:17:37.647 "zcopy": false, 00:17:37.647 "c2h_success": false, 00:17:37.647 "sock_priority": 0, 00:17:37.647 "abort_timeout_sec": 1, 00:17:37.647 "ack_timeout": 0, 00:17:37.647 "data_wr_pool_size": 0 00:17:37.647 } 00:17:37.647 }, 00:17:37.647 { 00:17:37.647 "method": "nvmf_create_subsystem", 00:17:37.647 "params": { 00:17:37.647 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.647 "allow_any_host": false, 00:17:37.647 "serial_number": "00000000000000000000", 00:17:37.647 "model_number": "SPDK bdev Controller", 00:17:37.647 "max_namespaces": 32, 00:17:37.647 "min_cntlid": 1, 00:17:37.647 "max_cntlid": 65519, 00:17:37.647 "ana_reporting": false 00:17:37.647 } 00:17:37.647 }, 00:17:37.647 { 00:17:37.647 "method": "nvmf_subsystem_add_host", 00:17:37.647 "params": { 00:17:37.647 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.647 "host": "nqn.2016-06.io.spdk:host1", 00:17:37.647 "psk": "key0" 00:17:37.647 } 00:17:37.647 }, 00:17:37.647 { 00:17:37.647 "method": "nvmf_subsystem_add_ns", 00:17:37.647 "params": { 00:17:37.647 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.647 "namespace": { 00:17:37.647 "nsid": 1, 00:17:37.647 "bdev_name": "malloc0", 00:17:37.647 "nguid": "180A804EF0C2407DAF636D34523F9EB6", 00:17:37.647 "uuid": "180a804e-f0c2-407d-af63-6d34523f9eb6", 00:17:37.647 "no_auto_visible": false 00:17:37.647 } 00:17:37.647 } 00:17:37.647 }, 00:17:37.647 { 00:17:37.647 "method": "nvmf_subsystem_add_listener", 00:17:37.647 "params": { 00:17:37.647 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.647 "listen_address": { 00:17:37.647 "trtype": "TCP", 00:17:37.647 "adrfam": "IPv4", 00:17:37.647 "traddr": "10.0.0.3", 00:17:37.647 "trsvcid": "4420" 00:17:37.647 }, 00:17:37.647 "secure_channel": false, 00:17:37.647 "sock_impl": "ssl" 00:17:37.647 } 00:17:37.647 } 
00:17:37.647 ] 00:17:37.647 } 00:17:37.647 ] 00:17:37.647 }' 00:17:37.647 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.647 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75241 00:17:37.647 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:37.647 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75241 00:17:37.647 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75241 ']' 00:17:37.647 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.647 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.647 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.647 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.647 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.906 [2024-11-19 00:01:44.345836] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:37.906 [2024-11-19 00:01:44.346022] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.906 [2024-11-19 00:01:44.523216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.165 [2024-11-19 00:01:44.616996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.165 [2024-11-19 00:01:44.617074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.166 [2024-11-19 00:01:44.617113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.166 [2024-11-19 00:01:44.617137] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.166 [2024-11-19 00:01:44.617152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
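For readers following the harness mechanics: the JSON dump above never touches disk. The shell builds it with echo and nvmf_tgt reads it back through the process substitution that shows up as -c /dev/fd/62 in the trace. A minimal sketch of the same pattern, built only from commands visible in this log (CONFIG_JSON is an illustrative name for the blob above):

  # Start the target from an inline JSON config inside the test namespace.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF \
      -c <(echo "$CONFIG_JSON") &
  nvmfpid=$!
  # waitforlisten (autotest_common.sh) polls until the app answers on
  # /var/tmp/spdk.sock, which is what the 'Waiting for process...' line reports.
  waitforlisten "$nvmfpid"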
00:17:38.166 [2024-11-19 00:01:44.618360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.424 [2024-11-19 00:01:44.908683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:38.425 [2024-11-19 00:01:45.065183] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.425 [2024-11-19 00:01:45.097156] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:38.425 [2024-11-19 00:01:45.097476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:38.684 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.684 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:38.684 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:38.684 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:38.684 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.684 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.684 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=75276 00:17:38.684 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 75276 /var/tmp/bdevperf.sock 00:17:38.684 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75276 ']' 00:17:38.684 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.684 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:38.684 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.684 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:17:38.684 "subsystems": [ 00:17:38.684 { 00:17:38.684 "subsystem": "keyring", 00:17:38.684 "config": [ 00:17:38.684 { 00:17:38.684 "method": "keyring_file_add_key", 00:17:38.684 "params": { 00:17:38.684 "name": "key0", 00:17:38.684 "path": "/tmp/tmp.NefXfLmuNW" 00:17:38.684 } 00:17:38.684 } 00:17:38.684 ] 00:17:38.684 }, 00:17:38.684 { 00:17:38.684 "subsystem": "iobuf", 00:17:38.684 "config": [ 00:17:38.684 { 00:17:38.684 "method": "iobuf_set_options", 00:17:38.684 "params": { 00:17:38.684 "small_pool_count": 8192, 00:17:38.684 "large_pool_count": 1024, 00:17:38.684 "small_bufsize": 8192, 00:17:38.684 "large_bufsize": 135168, 00:17:38.684 "enable_numa": false 00:17:38.684 } 00:17:38.684 } 00:17:38.684 ] 00:17:38.684 }, 00:17:38.684 { 00:17:38.684 "subsystem": "sock", 00:17:38.684 "config": [ 00:17:38.684 { 00:17:38.684 "method": "sock_set_default_impl", 00:17:38.684 "params": { 00:17:38.684 "impl_name": "uring" 00:17:38.684 } 00:17:38.684 }, 00:17:38.684 { 00:17:38.684 "method": "sock_impl_set_options", 00:17:38.684 "params": { 00:17:38.684 "impl_name": "ssl", 00:17:38.684 "recv_buf_size": 4096, 00:17:38.684 "send_buf_size": 4096, 00:17:38.685 "enable_recv_pipe": true, 00:17:38.685 "enable_quickack": false, 00:17:38.685 "enable_placement_id": 0, 00:17:38.685 "enable_zerocopy_send_server": true, 00:17:38.685 
"enable_zerocopy_send_client": false, 00:17:38.685 "zerocopy_threshold": 0, 00:17:38.685 "tls_version": 0, 00:17:38.685 "enable_ktls": false 00:17:38.685 } 00:17:38.685 }, 00:17:38.685 { 00:17:38.685 "method": "sock_impl_set_options", 00:17:38.685 "params": { 00:17:38.685 "impl_name": "posix", 00:17:38.685 "recv_buf_size": 2097152, 00:17:38.685 "send_buf_size": 2097152, 00:17:38.685 "enable_recv_pipe": true, 00:17:38.685 "enable_quickack": false, 00:17:38.685 "enable_placement_id": 0, 00:17:38.685 "enable_zerocopy_send_server": true, 00:17:38.685 "enable_zerocopy_send_client": false, 00:17:38.685 "zerocopy_threshold": 0, 00:17:38.685 "tls_version": 0, 00:17:38.685 "enable_ktls": false 00:17:38.685 } 00:17:38.685 }, 00:17:38.685 { 00:17:38.685 "method": "sock_impl_set_options", 00:17:38.685 "params": { 00:17:38.685 "impl_name": "uring", 00:17:38.685 "recv_buf_size": 2097152, 00:17:38.685 "send_buf_size": 2097152, 00:17:38.685 "enable_recv_pipe": true, 00:17:38.685 "enable_quickack": false, 00:17:38.685 "enable_placement_id": 0, 00:17:38.685 "enable_zerocopy_send_server": false, 00:17:38.685 "enable_zerocopy_send_client": false, 00:17:38.685 "zerocopy_threshold": 0, 00:17:38.685 "tls_version": 0, 00:17:38.685 "enable_ktls": false 00:17:38.685 } 00:17:38.685 } 00:17:38.685 ] 00:17:38.685 }, 00:17:38.685 { 00:17:38.685 "subsystem": "vmd", 00:17:38.685 "config": [] 00:17:38.685 }, 00:17:38.685 { 00:17:38.685 "subsystem": "accel", 00:17:38.685 "config": [ 00:17:38.685 { 00:17:38.685 "method": "accel_set_options", 00:17:38.685 "params": { 00:17:38.685 "small_cache_size": 128, 00:17:38.685 "large_cache_size": 16, 00:17:38.685 "task_count": 2048, 00:17:38.685 "sequence_count": 2048, 00:17:38.685 "buf_count": 2048 00:17:38.685 } 00:17:38.685 } 00:17:38.685 ] 00:17:38.685 }, 00:17:38.685 { 00:17:38.685 "subsystem": "bdev", 00:17:38.685 "config": [ 00:17:38.685 { 00:17:38.685 "method": "bdev_set_options", 00:17:38.685 "params": { 00:17:38.685 "bdev_io_pool_size": 65535, 00:17:38.685 "bdev_io_cache_size": 256, 00:17:38.685 "bdev_auto_examine": true, 00:17:38.685 "iobuf_small_cache_size": 128, 00:17:38.685 "iobuf_large_cache_size": 16 00:17:38.685 } 00:17:38.685 }, 00:17:38.685 { 00:17:38.685 "method": "bdev_raid_set_options", 00:17:38.685 "params": { 00:17:38.685 "process_window_size_kb": 1024, 00:17:38.685 "process_max_bandwidth_mb_sec": 0 00:17:38.685 } 00:17:38.685 }, 00:17:38.685 { 00:17:38.685 "method": "bdev_iscsi_set_options", 00:17:38.685 "params": { 00:17:38.685 "timeout_sec": 30 00:17:38.685 } 00:17:38.685 }, 00:17:38.685 { 00:17:38.685 "method": "bdev_nvme_set_options", 00:17:38.685 "params": { 00:17:38.685 "action_on_timeout": "none", 00:17:38.685 "timeout_us": 0, 00:17:38.685 "timeout_admin_us": 0, 00:17:38.685 "keep_alive_timeout_ms": 10000, 00:17:38.685 "arbitration_burst": 0, 00:17:38.685 "low_priority_weight": 0, 00:17:38.685 "medium_priority_weight": 0, 00:17:38.685 "high_priority_weight": 0, 00:17:38.685 "nvme_adminq_poll_period_us": 10000, 00:17:38.685 "nvme_ioq_poll_period_us": 0, 00:17:38.685 "io_queue_requests": 512, 00:17:38.685 "delay_cmd_submit": true, 00:17:38.685 "transport_retry_count": 4, 00:17:38.685 "bdev_retry_count": 3, 00:17:38.685 "transport_ack_timeout": 0, 00:17:38.685 "ctrlr_loss_timeout_sec": 0, 00:17:38.685 "reconnect_delay_sec": 0, 00:17:38.685 "fast_io_fail_timeout_sec": 0, 00:17:38.685 "disable_auto_failback": false, 00:17:38.685 "generate_uuids": false, 00:17:38.685 "transport_tos": 0, 00:17:38.685 "nvme_error_stat": false, 00:17:38.685 "rdma_srq_size": 0, 
00:17:38.685 "io_path_stat": false, 00:17:38.685 "allow_accel_sequence": false, 00:17:38.685 "rdma_max_cq_size": 0, 00:17:38.685 "rdma_cm_event_timeout_ms": 0, 00:17:38.685 "dhchap_digests": [ 00:17:38.685 "sha256", 00:17:38.685 "sha384", 00:17:38.685 "sha512" 00:17:38.685 ], 00:17:38.685 "dhchap_dhgroups": [ 00:17:38.685 "null", 00:17:38.685 "ffdhe2048", 00:17:38.685 "ffdhe3072", 00:17:38.685 "ffdhe4096", 00:17:38.685 "ffdhe6144", 00:17:38.685 "ffdhe8192" 00:17:38.685 ] 00:17:38.685 } 00:17:38.685 }, 00:17:38.685 { 00:17:38.685 "method": "bdev_nvme_attach_controller", 00:17:38.685 "params": { 00:17:38.685 "name": "nvme0", 00:17:38.685 "trtype": "TCP", 00:17:38.685 "adrfam": "IPv4", 00:17:38.685 "traddr": "10.0.0.3", 00:17:38.685 "trsvcid": "4420", 00:17:38.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.685 "prchk_reftag": false, 00:17:38.685 "prchk_guard": false, 00:17:38.685 "ctrlr_loss_timeout_sec": 0, 00:17:38.685 "reconnect_delay_sec": 0, 00:17:38.685 "fast_io_fail_timeout_sec": 0, 00:17:38.685 "psk": "key0", 00:17:38.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:38.685 "hdgst": false, 00:17:38.685 "ddgst": false, 00:17:38.685 "multipath": "multipath" 00:17:38.685 } 00:17:38.685 }, 00:17:38.685 { 00:17:38.685 "method": "bdev_nvme_set_hotplug", 00:17:38.685 "params": { 00:17:38.685 "period_us": 100000, 00:17:38.685 "enable": false 00:17:38.685 } 00:17:38.685 }, 00:17:38.685 { 00:17:38.685 "method": "bdev_enable_histogram", 00:17:38.685 "params": { 00:17:38.685 "name": "nvme0n1", 00:17:38.685 "enable": true 00:17:38.685 } 00:17:38.685 }, 00:17:38.685 { 00:17:38.685 "method": "bdev_wait_for_examine" 00:17:38.685 } 00:17:38.685 ] 00:17:38.685 }, 00:17:38.685 { 00:17:38.685 "subsystem": "nbd", 00:17:38.685 "config": [] 00:17:38.685 } 00:17:38.685 ] 00:17:38.685 }' 00:17:38.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.685 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.685 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.685 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:38.945 [2024-11-19 00:01:45.463719] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:17:38.945 [2024-11-19 00:01:45.464368] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75276 ] 00:17:39.204 [2024-11-19 00:01:45.642702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.204 [2024-11-19 00:01:45.772150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.463 [2024-11-19 00:01:46.031289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:39.463 [2024-11-19 00:01:46.137822] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.031 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.031 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:40.031 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:40.031 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:17:40.031 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.031 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:40.313 Running I/O for 1 seconds... 00:17:41.258 2891.00 IOPS, 11.29 MiB/s 00:17:41.258 Latency(us) 00:17:41.258 [2024-11-19T00:01:47.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.258 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:41.258 Verification LBA range: start 0x0 length 0x2000 00:17:41.258 nvme0n1 : 1.04 2916.61 11.39 0.00 0.00 43093.24 8638.84 27405.96 00:17:41.258 [2024-11-19T00:01:47.950Z] =================================================================================================================== 00:17:41.258 [2024-11-19T00:01:47.950Z] Total : 2916.61 11.39 0.00 0.00 43093.24 8638.84 27405.96 00:17:41.258 { 00:17:41.258 "results": [ 00:17:41.258 { 00:17:41.258 "job": "nvme0n1", 00:17:41.258 "core_mask": "0x2", 00:17:41.258 "workload": "verify", 00:17:41.258 "status": "finished", 00:17:41.258 "verify_range": { 00:17:41.258 "start": 0, 00:17:41.258 "length": 8192 00:17:41.258 }, 00:17:41.258 "queue_depth": 128, 00:17:41.258 "io_size": 4096, 00:17:41.258 "runtime": 1.035106, 00:17:41.258 "iops": 2916.6095066592216, 00:17:41.258 "mibps": 11.393005885387584, 00:17:41.258 "io_failed": 0, 00:17:41.258 "io_timeout": 0, 00:17:41.258 "avg_latency_us": 43093.240314372604, 00:17:41.258 "min_latency_us": 8638.836363636363, 00:17:41.258 "max_latency_us": 27405.963636363635 00:17:41.258 } 00:17:41.258 ], 00:17:41.258 "core_count": 1 00:17:41.258 } 00:17:41.258 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:17:41.258 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:17:41.258 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:41.258 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:17:41.258 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 
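The cleanup that begins here first archives the target's trace ring from shared memory so it can be decoded offline. Stripped of its argument handling, process_shm --id 0 reduces to roughly the find/tar pair that follows in the trace (the output path is the one shown there; the loop is a simplified sketch):

  # Archive nvmf_trace.0 from /dev/shm for offline decoding.
  shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')
  for n in $shm_files; do   # unquoted on purpose: one file name per line
      tar -C /dev/shm/ -cvzf \
          "/home/vagrant/spdk_repo/spdk/../output/${n}_shm.tar.gz" "$n"
  done

The resulting tarball pairs with the 'spdk_trace -s nvmf -i 0' hint the target printed at startup.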
00:17:41.258 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:41.258 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:41.258 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:41.258 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:41.258 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:41.258 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:41.258 nvmf_trace.0 00:17:41.518 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:17:41.518 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 75276 00:17:41.518 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75276 ']' 00:17:41.518 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75276 00:17:41.518 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:41.518 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.518 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75276 00:17:41.518 killing process with pid 75276 00:17:41.518 Received shutdown signal, test time was about 1.000000 seconds 00:17:41.518 00:17:41.518 Latency(us) 00:17:41.518 [2024-11-19T00:01:48.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.518 [2024-11-19T00:01:48.210Z] =================================================================================================================== 00:17:41.518 [2024-11-19T00:01:48.210Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:41.518 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:41.518 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:41.518 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75276' 00:17:41.518 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75276 00:17:41.518 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75276 00:17:42.457 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:42.457 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:42.457 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:17:42.457 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:42.457 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:17:42.457 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:42.457 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:42.457 rmmod nvme_tcp 00:17:42.457 rmmod nvme_fabrics 00:17:42.457 rmmod nvme_keyring 00:17:42.457 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:42.457 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:17:42.457 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:17:42.457 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 75241 ']' 00:17:42.457 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 75241 00:17:42.457 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75241 ']' 00:17:42.457 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75241 00:17:42.457 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:42.457 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.457 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75241 00:17:42.457 killing process with pid 75241 00:17:42.457 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:42.457 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:42.457 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75241' 00:17:42.457 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75241 00:17:42.457 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75241 00:17:43.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:43.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:43.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:43.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:17:43.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:17:43.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:43.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:17:43.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:43.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:43.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:43.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:43.400 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:43.660 00:01:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.xTTaKAfWxZ /tmp/tmp.MtbfundQT0 /tmp/tmp.NefXfLmuNW 00:17:43.660 00:17:43.660 real 1m47.489s 00:17:43.660 user 2m59.201s 00:17:43.660 sys 0m26.325s 00:17:43.660 ************************************ 00:17:43.660 END TEST nvmf_tls 00:17:43.660 ************************************ 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:43.660 ************************************ 00:17:43.660 START TEST nvmf_fips 00:17:43.660 ************************************ 00:17:43.660 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:43.920 * Looking for test storage... 
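Two details close the tls suite above: the three /tmp/tmp.* files removed by cleanup are the TLS PSK key files created during the run (key0 in the bdevperf config pointed at the last of them), and the real/user/sys triple is the shell's timing summary for the whole suite. Suites then chain through one wrapper, run_test from autotest_common.sh, which prints the START/END banners, times the script, and forwards the transport flag; the call behind the banner is simply:

  # run_test emits the banners and pass/fail bookkeeping seen in this log.
  run_test nvmf_fips \
      /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp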
00:17:43.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:43.920 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:43.920 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:17:43.920 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:43.920 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:43.920 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:43.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.921 --rc genhtml_branch_coverage=1 00:17:43.921 --rc genhtml_function_coverage=1 00:17:43.921 --rc genhtml_legend=1 00:17:43.921 --rc geninfo_all_blocks=1 00:17:43.921 --rc geninfo_unexecuted_blocks=1 00:17:43.921 00:17:43.921 ' 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:43.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.921 --rc genhtml_branch_coverage=1 00:17:43.921 --rc genhtml_function_coverage=1 00:17:43.921 --rc genhtml_legend=1 00:17:43.921 --rc geninfo_all_blocks=1 00:17:43.921 --rc geninfo_unexecuted_blocks=1 00:17:43.921 00:17:43.921 ' 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:43.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.921 --rc genhtml_branch_coverage=1 00:17:43.921 --rc genhtml_function_coverage=1 00:17:43.921 --rc genhtml_legend=1 00:17:43.921 --rc geninfo_all_blocks=1 00:17:43.921 --rc geninfo_unexecuted_blocks=1 00:17:43.921 00:17:43.921 ' 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:43.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.921 --rc genhtml_branch_coverage=1 00:17:43.921 --rc genhtml_function_coverage=1 00:17:43.921 --rc genhtml_legend=1 00:17:43.921 --rc geninfo_all_blocks=1 00:17:43.921 --rc geninfo_unexecuted_blocks=1 00:17:43.921 00:17:43.921 ' 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
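The scripts/common.sh chatter above (and again around the openssl check below) is a single helper: a dotted-version comparator that backs both the 'lt 1.15 2' lcov gate and the 'ge 3.1.1 3.0.0' OpenSSL gate. Its core logic, condensed into a sketch (the real cmp_versions also splits on '-' and ':' and supports more operators):

  # Sketch of the version gate: compare dotted versions field by field.
  ge() {  # usage: ge 3.1.1 3.0.0  -> returns 0 when $1 >= $2
      local -a v1 v2
      local i
      IFS=. read -ra v1 <<< "$1"
      IFS=. read -ra v2 <<< "$2"
      for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 0   # strictly newer
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 1   # strictly older
      done
      return 0  # equal versions satisfy >=
  }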
00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:43.921 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:17:43.921 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:17:43.922 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:17:44.182 Error setting digest 00:17:44.182 404205587F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:44.182 404205587F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:44.182 
00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:44.182 Cannot find device "nvmf_init_br" 00:17:44.182 00:01:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:44.182 Cannot find device "nvmf_init_br2" 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:44.182 Cannot find device "nvmf_tgt_br" 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:44.182 Cannot find device "nvmf_tgt_br2" 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:44.182 Cannot find device "nvmf_init_br" 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:44.182 Cannot find device "nvmf_init_br2" 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:44.182 Cannot find device "nvmf_tgt_br" 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:44.182 Cannot find device "nvmf_tgt_br2" 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:44.182 Cannot find device "nvmf_br" 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:44.182 Cannot find device "nvmf_init_if" 00:17:44.182 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:17:44.183 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:44.183 Cannot find device "nvmf_init_if2" 00:17:44.183 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:17:44.183 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:44.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:44.183 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:17:44.183 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:44.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:44.183 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:17:44.183 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:44.183 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:44.183 00:01:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:44.442 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:44.442 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:44.442 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:44.442 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:44.442 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:44.442 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:44.442 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:44.442 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:44.442 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:44.442 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:44.442 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:44.442 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:17:44.442 00:17:44.442 --- 10.0.0.3 ping statistics --- 00:17:44.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.443 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:17:44.443 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:44.443 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:44.443 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:17:44.443 00:17:44.443 --- 10.0.0.4 ping statistics --- 00:17:44.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.443 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:44.443 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:44.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:44.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:17:44.443 00:17:44.443 --- 10.0.0.1 ping statistics --- 00:17:44.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.443 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:17:44.443 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:44.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:44.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:17:44.702 00:17:44.702 --- 10.0.0.2 ping statistics --- 00:17:44.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.702 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=75611 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 75611 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 75611 ']' 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.702 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:44.702 [2024-11-19 00:01:51.333157] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
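The trace up to this point is SPDK's nvmf_veth_init building its test network: a namespace for the target, veth pairs whose bridge-side ends are enslaved to nvmf_br, iptables rules admitting NVMe/TCP on port 4420, and one ping per address to prove connectivity. Below is a minimal sketch of that topology in plain shell; interface names and addresses mirror the log, while the second veth pair (nvmf_init_if2/nvmf_tgt_if2) and the iptables comment tags from the trace are omitted for brevity.

# Sketch of the veth/bridge topology the trace builds (names from the log):
ip netns add nvmf_tgt_ns_spdk                      # target gets its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk     # target end moves into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if           # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                    # bridge joins the two veth halves
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.3                                 # initiator-to-target reachability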
00:17:44.702 [2024-11-19 00:01:51.333329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.961 [2024-11-19 00:01:51.525556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.220 [2024-11-19 00:01:51.656550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.220 [2024-11-19 00:01:51.656649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.220 [2024-11-19 00:01:51.656699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:45.220 [2024-11-19 00:01:51.656747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:45.220 [2024-11-19 00:01:51.656775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:45.220 [2024-11-19 00:01:51.658640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.220 [2024-11-19 00:01:51.865554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:45.789 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.789 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:17:45.789 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:45.789 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:45.789 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:45.789 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.789 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:45.789 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:45.789 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:17:45.789 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.t3Z 00:17:45.789 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:45.789 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.t3Z 00:17:45.789 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.t3Z 00:17:45.789 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.t3Z 00:17:45.789 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:46.048 [2024-11-19 00:01:52.608832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.048 [2024-11-19 00:01:52.624825] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:46.048 [2024-11-19 00:01:52.625313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:46.048 malloc0 00:17:46.048 00:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:46.048 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=75653 00:17:46.048 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 75653 /var/tmp/bdevperf.sock 00:17:46.048 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 75653 ']' 00:17:46.048 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:46.048 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:46.048 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:46.048 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.048 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:46.048 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:46.308 [2024-11-19 00:01:52.869431] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:46.308 [2024-11-19 00:01:52.869657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75653 ] 00:17:46.568 [2024-11-19 00:01:53.057902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.568 [2024-11-19 00:01:53.188592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.827 [2024-11-19 00:01:53.376306] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:47.394 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.394 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:17:47.394 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.t3Z 00:17:47.394 00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:47.653 [2024-11-19 00:01:54.290161] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:47.911 TLSTESTn1 00:17:47.911 00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:47.911 Running I/O for 10 seconds... 
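By this point fips.sh has driven the whole TLS setup: the interchange-format PSK (NVMeTLSkey-1:01:...) was written to a mode-0600 temp file, registered with the bdevperf app as keyring key "key0" over its private RPC socket, a TLS controller was attached to the listener on 10.0.0.3:4420, and the verify workload was started. A condensed sketch of that sequence follows; rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the spdk-psk.t3Z suffix seen in the trace is simply what mktemp produced on this run.

# Sketch of the TLS flow fips.sh traces above (paths/names from this run):
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
key_path=$(mktemp -t spdk-psk.XXX)
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"                    # restrictive mode, as the trace shows
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests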
00:17:50.228 2892.00 IOPS, 11.30 MiB/s
[2024-11-19T00:01:57.857Z] 2956.00 IOPS, 11.55 MiB/s
[2024-11-19T00:01:58.793Z] 2984.33 IOPS, 11.66 MiB/s
[2024-11-19T00:01:59.728Z] 2999.50 IOPS, 11.72 MiB/s
[2024-11-19T00:02:00.666Z] 3010.40 IOPS, 11.76 MiB/s
[2024-11-19T00:02:01.602Z] 3015.33 IOPS, 11.78 MiB/s
[2024-11-19T00:02:02.564Z] 3032.43 IOPS, 11.85 MiB/s
[2024-11-19T00:02:03.943Z] 3092.00 IOPS, 12.08 MiB/s
[2024-11-19T00:02:04.883Z] 3146.56 IOPS, 12.29 MiB/s
[2024-11-19T00:02:04.883Z] 3186.00 IOPS, 12.45 MiB/s
00:17:58.191 Latency(us)
00:17:58.191 [2024-11-19T00:02:04.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:58.191 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:58.191 Verification LBA range: start 0x0 length 0x2000
00:17:58.191 TLSTESTn1 : 10.02 3192.39 12.47 0.00 0.00 40025.55 6464.23 39083.29
00:17:58.191 [2024-11-19T00:02:04.883Z] ===================================================================================================================
00:17:58.191 [2024-11-19T00:02:04.883Z] Total : 3192.39 12.47 0.00 0.00 40025.55 6464.23 39083.29
00:17:58.191 {
00:17:58.191   "results": [
00:17:58.191     {
00:17:58.191       "job": "TLSTESTn1",
00:17:58.191       "core_mask": "0x4",
00:17:58.191       "workload": "verify",
00:17:58.191       "status": "finished",
00:17:58.191       "verify_range": {
00:17:58.191         "start": 0,
00:17:58.191         "length": 8192
00:17:58.191       },
00:17:58.191       "queue_depth": 128,
00:17:58.191       "io_size": 4096,
00:17:58.191       "runtime": 10.020088,
00:17:58.191       "iops": 3192.3871327277766,
00:17:58.191       "mibps": 12.470262237217877,
00:17:58.191       "io_failed": 0,
00:17:58.191       "io_timeout": 0,
00:17:58.191       "avg_latency_us": 40025.54886809827,
00:17:58.191       "min_latency_us": 6464.232727272727,
00:17:58.191       "max_latency_us": 39083.28727272727
00:17:58.191     }
00:17:58.191   ],
00:17:58.191   "core_count": 1
00:17:58.191 }
00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id
00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0
00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files
00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:17:58.192 nvmf_trace.0
00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0
00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 75653
00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 75653 ']'
00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0
75653 00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75653 00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:58.192 killing process with pid 75653 00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75653' 00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 75653 00:17:58.192 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.192 00:17:58.192 Latency(us) 00:17:58.192 [2024-11-19T00:02:04.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.192 [2024-11-19T00:02:04.884Z] =================================================================================================================== 00:17:58.192 [2024-11-19T00:02:04.884Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:58.192 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 75653 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:59.130 rmmod nvme_tcp 00:17:59.130 rmmod nvme_fabrics 00:17:59.130 rmmod nvme_keyring 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 75611 ']' 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 75611 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 75611 ']' 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 75611 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75611 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:17:59.130 killing process with pid 75611 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75611' 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 75611 00:17:59.130 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 75611 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:00.069 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:00.328 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:00.328 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:00.328 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:00.328 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.328 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.328 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.328 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:18:00.328 00:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.t3Z 00:18:00.328 00:18:00.328 real 0m16.514s 00:18:00.328 user 0m23.902s 00:18:00.328 sys 0m5.469s 00:18:00.328 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.328 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:00.328 ************************************ 00:18:00.328 END TEST nvmf_fips 00:18:00.328 ************************************ 00:18:00.328 00:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:00.328 00:02:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:00.328 00:02:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.328 00:02:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:00.328 ************************************ 00:18:00.328 START TEST nvmf_control_msg_list 00:18:00.328 ************************************ 00:18:00.328 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:00.328 * Looking for test storage... 00:18:00.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:00.328 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:00.328 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:18:00.328 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:00.589 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:00.589 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.589 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.589 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.589 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.589 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.589 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.589 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.589 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.589 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.589 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.589 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.589 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:18:00.589 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:00.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.590 --rc genhtml_branch_coverage=1 00:18:00.590 --rc genhtml_function_coverage=1 00:18:00.590 --rc genhtml_legend=1 00:18:00.590 --rc geninfo_all_blocks=1 00:18:00.590 --rc geninfo_unexecuted_blocks=1 00:18:00.590 00:18:00.590 ' 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:00.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.590 --rc genhtml_branch_coverage=1 00:18:00.590 --rc genhtml_function_coverage=1 00:18:00.590 --rc genhtml_legend=1 00:18:00.590 --rc geninfo_all_blocks=1 00:18:00.590 --rc geninfo_unexecuted_blocks=1 00:18:00.590 00:18:00.590 ' 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:00.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.590 --rc genhtml_branch_coverage=1 00:18:00.590 --rc genhtml_function_coverage=1 00:18:00.590 --rc genhtml_legend=1 00:18:00.590 --rc geninfo_all_blocks=1 00:18:00.590 --rc geninfo_unexecuted_blocks=1 00:18:00.590 00:18:00.590 ' 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:00.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.590 --rc genhtml_branch_coverage=1 00:18:00.590 --rc genhtml_function_coverage=1 00:18:00.590 --rc genhtml_legend=1 00:18:00.590 --rc geninfo_all_blocks=1 00:18:00.590 --rc 
geninfo_unexecuted_blocks=1 00:18:00.590 00:18:00.590 ' 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:00.590 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:00.590 Cannot find device "nvmf_init_br" 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:00.590 Cannot find device "nvmf_init_br2" 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:00.590 Cannot find device "nvmf_tgt_br" 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.590 Cannot find device "nvmf_tgt_br2" 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:00.590 Cannot find device "nvmf_init_br" 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:00.590 Cannot find device "nvmf_init_br2" 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:00.590 Cannot find device "nvmf_tgt_br" 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:00.590 Cannot find device "nvmf_tgt_br2" 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:00.590 Cannot find device "nvmf_br" 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:00.590 Cannot find 
device "nvmf_init_if" 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:00.590 Cannot find device "nvmf_init_if2" 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:00.590 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:00.590 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:00.590 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:00.850 00:02:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:00.850 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:00.850 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:18:00.850 00:18:00.850 --- 10.0.0.3 ping statistics --- 00:18:00.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.850 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:18:00.850 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:00.850 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:00.851 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:18:00.851 00:18:00.851 --- 10.0.0.4 ping statistics --- 00:18:00.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.851 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:00.851 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:00.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:00.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:18:00.851 00:18:00.851 --- 10.0.0.1 ping statistics --- 00:18:00.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.851 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:18:00.851 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:00.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:18:00.851 00:18:00.851 --- 10.0.0.2 ping statistics --- 00:18:00.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.851 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:18:00.851 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.851 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:18:00.851 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:00.851 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.851 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:00.851 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:00.851 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.851 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:00.851 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:00.851 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:18:00.851 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:00.851 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.851 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:01.110 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=76056 00:18:01.110 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:01.110 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 76056 00:18:01.110 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 76056 ']' 00:18:01.110 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.110 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.110 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
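The nvmfappstart call traced here reduces to launching nvmf_tgt inside the target namespace and waiting for its RPC socket to answer. A minimal equivalent is sketched below; waitforlisten in autotest_common.sh also bounds the retries and checks that the pid is still alive, which this loop omits.

# Launch the target in the netns, then poll its RPC socket until it responds.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
        >/dev/null 2>&1; do
    sleep 0.1                             # /var/tmp/spdk.sock not listening yet
done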
00:18:01.110 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.110 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:01.110 [2024-11-19 00:02:07.663043] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:18:01.110 [2024-11-19 00:02:07.663220] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.369 [2024-11-19 00:02:07.844904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.369 [2024-11-19 00:02:07.925266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.369 [2024-11-19 00:02:07.925359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.369 [2024-11-19 00:02:07.925393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.369 [2024-11-19 00:02:07.925414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.369 [2024-11-19 00:02:07.925426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:01.369 [2024-11-19 00:02:07.926468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.628 [2024-11-19 00:02:08.088248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:01.988 [2024-11-19 00:02:08.615121] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:01.988 Malloc0 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.988 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:01.988 [2024-11-19 00:02:08.672913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:02.248 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.248 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=76088 00:18:02.248 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:02.248 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=76089 00:18:02.248 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:02.248 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=76090 00:18:02.248 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:02.248 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 76088 00:18:02.248 [2024-11-19 00:02:08.917669] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:18:02.248 [2024-11-19 00:02:08.929123] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:18:02.248 [2024-11-19 00:02:08.929658] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:18:03.628 Initializing NVMe Controllers
00:18:03.628 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:18:03.628 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:18:03.628 Initialization complete. Launching workers.
00:18:03.628 ========================================================
00:18:03.628 Latency(us)
00:18:03.628 Device Information : IOPS MiB/s Average min max
00:18:03.628 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2854.00 11.15 349.75 175.15 901.29
00:18:03.628 ========================================================
00:18:03.628 Total : 2854.00 11.15 349.75 175.15 901.29
00:18:03.628
00:18:03.628 Initializing NVMe Controllers
00:18:03.628 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:18:03.628 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:18:03.628 Initialization complete. Launching workers.
00:18:03.628 ========================================================
00:18:03.628 Latency(us)
00:18:03.628 Device Information : IOPS MiB/s Average min max
00:18:03.628 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2866.00 11.20 348.31 200.68 993.12
00:18:03.628 ========================================================
00:18:03.628 Total : 2866.00 11.20 348.31 200.68 993.12
00:18:03.628
00:18:03.628 Initializing NVMe Controllers
00:18:03.628 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:18:03.628 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:18:03.628 Initialization complete. Launching workers.
00:18:03.628 ========================================================
00:18:03.628 Latency(us)
00:18:03.628 Device Information : IOPS MiB/s Average min max
00:18:03.628 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2876.97 11.24 346.91 149.28 827.02
00:18:03.628 ========================================================
00:18:03.628 Total : 2876.97 11.24 346.91 149.28 827.02
00:18:03.628
00:18:03.628 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 76089
00:18:03.628 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 76090
00:18:03.628 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:18:03.628 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:18:03.628 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:03.628 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:18:03.628 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:18:03.628 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:18:03.628 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:03.628 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:18:03.628 rmmod nvme_tcp
00:18:03.628 rmmod nvme_fabrics
00:18:03.628 rmmod nvme_keyring
00:18:03.628 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:03.628 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:18:03.628 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:18:03.628 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 76056 ']'
00:18:03.628 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 76056
00:18:03.628 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 76056 ']'
00:18:03.628 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 76056
00:18:03.628 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname
00:18:03.628 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:03.628 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76056
00:18:03.629 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:03.629 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:03.629 killing process with pid 76056
00:18:03.629 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76056'
00:18:03.629 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 76056
00:18:03.629 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list --
common/autotest_common.sh@978 -- # wait 76056 00:18:04.567 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:04.568 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:04.568 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:04.568 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:18:04.568 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:18:04.568 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:18:04.568 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:04.568 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:04.568 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:04.568 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:04.568 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:04.568 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:04.568 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:04.568 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:04.568 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:04.568 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:04.568 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:04.568 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:04.568 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:04.568 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:04.568 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:04.568 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:04.568 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:04.568 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.568 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.568 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:18:04.828 ************************************ 00:18:04.828 END TEST 
nvmf_control_msg_list 00:18:04.828 ************************************ 00:18:04.828 00:18:04.828 real 0m4.372s 00:18:04.828 user 0m6.551s 00:18:04.828 sys 0m1.426s 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:04.828 ************************************ 00:18:04.828 START TEST nvmf_wait_for_buf 00:18:04.828 ************************************ 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:04.828 * Looking for test storage... 00:18:04.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:04.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.828 --rc genhtml_branch_coverage=1 00:18:04.828 --rc genhtml_function_coverage=1 00:18:04.828 --rc genhtml_legend=1 00:18:04.828 --rc geninfo_all_blocks=1 00:18:04.828 --rc geninfo_unexecuted_blocks=1 00:18:04.828 00:18:04.828 ' 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:04.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.828 --rc genhtml_branch_coverage=1 00:18:04.828 --rc genhtml_function_coverage=1 00:18:04.828 --rc genhtml_legend=1 00:18:04.828 --rc geninfo_all_blocks=1 00:18:04.828 --rc geninfo_unexecuted_blocks=1 00:18:04.828 00:18:04.828 ' 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:04.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.828 --rc genhtml_branch_coverage=1 00:18:04.828 --rc genhtml_function_coverage=1 00:18:04.828 --rc genhtml_legend=1 00:18:04.828 --rc geninfo_all_blocks=1 00:18:04.828 --rc geninfo_unexecuted_blocks=1 00:18:04.828 00:18:04.828 ' 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:04.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.828 --rc genhtml_branch_coverage=1 00:18:04.828 --rc genhtml_function_coverage=1 00:18:04.828 --rc genhtml_legend=1 00:18:04.828 --rc geninfo_all_blocks=1 00:18:04.828 --rc geninfo_unexecuted_blocks=1 00:18:04.828 00:18:04.828 ' 00:18:04.828 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:04.828 00:02:11 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:05.089 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
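
The nvmftestinit/nvmf_veth_init sequence traced below wires up an all-virtual network for the test: the initiator veth ends stay in the root namespace (10.0.0.1, 10.0.0.2), the target ends move into the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), and both sides are joined by the nvmf_br bridge, with iptables rules admitting TCP port 4420. A condensed sketch of the same topology with a single initiator/target pair; it assumes root and iproute2, and omits the second pair and the iptables rules seen in the log:

    # One bridged veth pair per side; names mirror the ones in the trace below.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk        # target side lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if              # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br               # bridge the two peer ends
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ping -c 1 10.0.0.3                                    # root ns -> target ns sanity check

This is why the pings later in the trace run in both directions: 10.0.0.3 and 10.0.0.4 are reached from the root namespace, while 10.0.0.1 and 10.0.0.2 are reached from inside the target namespace.
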
00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:05.089 Cannot find device "nvmf_init_br" 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:05.089 Cannot find device "nvmf_init_br2" 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:05.089 Cannot find device "nvmf_tgt_br" 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:05.089 Cannot find device "nvmf_tgt_br2" 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:05.089 Cannot find device "nvmf_init_br" 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:05.089 Cannot find device "nvmf_init_br2" 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:05.089 Cannot find device "nvmf_tgt_br" 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:05.089 Cannot find device "nvmf_tgt_br2" 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:05.089 Cannot find device "nvmf_br" 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:05.089 Cannot find device "nvmf_init_if" 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:05.089 Cannot find device "nvmf_init_if2" 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:05.089 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:05.089 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:05.089 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:05.090 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:05.090 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:05.090 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:05.090 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:05.090 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:05.090 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:05.090 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:05.090 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:05.349 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:05.350 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:05.350 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:18:05.350 00:18:05.350 --- 10.0.0.3 ping statistics --- 00:18:05.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.350 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:05.350 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:05.350 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:18:05.350 00:18:05.350 --- 10.0.0.4 ping statistics --- 00:18:05.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.350 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:05.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:05.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:18:05.350 00:18:05.350 --- 10.0.0.1 ping statistics --- 00:18:05.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.350 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:05.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:05.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:18:05.350 00:18:05.350 --- 10.0.0.2 ping statistics --- 00:18:05.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.350 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=76335 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 76335 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 76335 ']' 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.350 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:05.610 [2024-11-19 00:02:12.054003] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
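
The trace that follows is the heart of the wait_for_buf case: the target was started with --wait-for-rpc, the shared iobuf small pool is capped before framework_start_init, the TCP transport gets only 24 shared buffers (-n 24 -b 24), and a 128 KiB randread perf run then forces requests to queue for buffers, which must surface as a non-zero small_pool.retry counter in iobuf_get_stats. A condensed replay of that RPC sequence; $rpc and $perf are shorthand variables introduced here, and the target is assumed to already be running with --wait-for-rpc:

    # Starve the nvmf/TCP target of iobufs, drive large reads, check the retry counter.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    $rpc accel_set_options --small-cache-size 0 --large-cache-size 0   # no accel-side caching
    $rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192 # tiny shared pool
    $rpc framework_start_init                                          # now finish startup
    $rpc bdev_malloc_create -b Malloc0 32 512
    $rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24           # few shared buffers
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    $perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
    retries=$($rpc iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retries -gt 0 ]] && echo "I/O had to wait for buffers $retries times"

In the run below the counter comes back as 4788, so the test's [[ retry_count -eq 0 ]] failure check does not trip.
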
00:18:05.610 [2024-11-19 00:02:12.054149] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.610 [2024-11-19 00:02:12.219455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.869 [2024-11-19 00:02:12.303432] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.869 [2024-11-19 00:02:12.303490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.869 [2024-11-19 00:02:12.303523] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.869 [2024-11-19 00:02:12.303544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.869 [2024-11-19 00:02:12.303556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.869 [2024-11-19 00:02:12.304717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.438 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.438 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:18:06.438 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:06.438 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:06.438 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:06.438 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.438 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:06.438 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:18:06.438 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:18:06.438 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.438 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:06.438 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.438 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:18:06.438 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.438 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:06.438 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.438 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:18:06.438 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.438 00:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:06.697 [2024-11-19 00:02:13.192045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:06.697 Malloc0 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:06.697 [2024-11-19 00:02:13.335452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:06.697 [2024-11-19 00:02:13.359677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.697 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:06.956 [2024-11-19 00:02:13.598829] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:18:08.335 Initializing NVMe Controllers
00:18:08.335 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:18:08.335 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:18:08.335 Initialization complete. Launching workers.
00:18:08.335 ========================================================
00:18:08.335 Latency(us)
00:18:08.335 Device Information : IOPS MiB/s Average min max
00:18:08.335 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 501.98 62.75 7967.94 6089.24 11937.41
00:18:08.335 ========================================================
00:18:08.335 Total : 501.98 62.75 7967.94 6089.24 11937.41
00:18:08.335
00:18:08.335 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:18:08.335 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:08.335 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:18:08.335 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:18:08.335 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:08.335 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4788
00:18:08.335 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4788 -eq 0 ]]
00:18:08.335 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:18:08.335 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:18:08.335 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:08.335 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:18:08.594 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:18:08.594 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:18:08.594 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:08.594 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:18:08.594 rmmod nvme_tcp
00:18:08.594 rmmod nvme_fabrics
00:18:08.594 rmmod nvme_keyring
00:18:08.594 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:08.594 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:18:08.594 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:18:08.594 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 76335 ']'
00:18:08.594 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 76335
00:18:08.594 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 76335 ']'
00:18:08.594 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 --
# kill -0 76335 00:18:08.594 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:18:08.594 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.594 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76335 00:18:08.594 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:08.594 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:08.594 killing process with pid 76335 00:18:08.594 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76335' 00:18:08.594 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 76335 00:18:08.594 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 76335 00:18:09.532 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:09.532 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:09.532 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:09.532 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:18:09.532 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:18:09.532 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:18:09.532 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:09.532 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:09.532 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:09.532 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:09.532 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:09.532 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:09.532 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:09.532 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:09.532 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:09.532 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:09.532 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:09.532 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:09.532 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:09.532 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:09.532 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:09.532 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:09.532 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:09.532 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.532 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.532 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.532 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:18:09.532 00:18:09.532 real 0m4.815s 00:18:09.532 user 0m4.325s 00:18:09.532 sys 0m0.909s 00:18:09.532 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:09.532 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:09.532 ************************************ 00:18:09.532 END TEST nvmf_wait_for_buf 00:18:09.532 ************************************ 00:18:09.532 00:02:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:18:09.532 00:02:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:09.532 00:02:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:09.532 00:02:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:09.532 00:02:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:09.532 ************************************ 00:18:09.532 START TEST nvmf_fuzz 00:18:09.532 ************************************ 00:18:09.532 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:09.792 * Looking for test storage... 
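The wait_for_buf verdict above comes down to one counter: the test passes only if the nvmf_TCP small iobuf pool had to retry allocations (4788 retries in this run), proving that buffer exhaustion was actually exercised. A minimal standalone re-creation of that check and of the teardown that follows it (a sketch only, assuming the SPDK repo's scripts/rpc.py and the default /var/tmp/spdk.sock RPC socket; the harness's rpc_cmd is a thin wrapper around the same call):

# Pass criterion: the TCP transport's small buffer pool saw at least one retry
retry_count=$(scripts/rpc.py iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
if [[ "$retry_count" -eq 0 ]]; then
    echo "FAIL: no iobuf pressure was generated" >&2
    exit 1
fi

# Teardown as traced above: unload the fabrics modules, stop the reactor
# process, then restore iptables minus the SPDK_NVMF-tagged rules
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
nvmfpid=76335                               # the nvmf_tgt pid from this run
kill "$nvmfpid" && wait "$nvmfpid"
iptables-save | grep -v SPDK_NVMF | iptables-restore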
00:18:09.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:09.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.793 --rc genhtml_branch_coverage=1 00:18:09.793 --rc genhtml_function_coverage=1 00:18:09.793 --rc genhtml_legend=1 00:18:09.793 --rc geninfo_all_blocks=1 00:18:09.793 --rc geninfo_unexecuted_blocks=1 00:18:09.793 00:18:09.793 ' 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:09.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.793 --rc genhtml_branch_coverage=1 00:18:09.793 --rc genhtml_function_coverage=1 00:18:09.793 --rc genhtml_legend=1 00:18:09.793 --rc geninfo_all_blocks=1 00:18:09.793 --rc geninfo_unexecuted_blocks=1 00:18:09.793 00:18:09.793 ' 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:09.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.793 --rc genhtml_branch_coverage=1 00:18:09.793 --rc genhtml_function_coverage=1 00:18:09.793 --rc genhtml_legend=1 00:18:09.793 --rc geninfo_all_blocks=1 00:18:09.793 --rc geninfo_unexecuted_blocks=1 00:18:09.793 00:18:09.793 ' 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:09.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.793 --rc genhtml_branch_coverage=1 00:18:09.793 --rc genhtml_function_coverage=1 00:18:09.793 --rc genhtml_legend=1 00:18:09.793 --rc geninfo_all_blocks=1 00:18:09.793 --rc geninfo_unexecuted_blocks=1 00:18:09.793 00:18:09.793 ' 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
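The cmp_versions walk traced above is how the harness decides whether the installed lcov (1.15 here) predates version 2 before choosing coverage flags: both version strings are split on dots and dashes into arrays and compared numerically field by field. Condensed into a self-contained helper (a sketch of the logic in scripts/common.sh, not a verbatim copy):

# Return success (0) when dotted version $1 is strictly older than $2
version_lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first differing field decides
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "old lcov: use the --rc lcov_*_coverage option spelling"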
00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:09.793 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
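The "line 33: [: : integer expression expected" message above is a benign quoting bug in the sourced test/nvmf/common.sh rather than a test failure: build_nvmf_app_args hands an empty variable to a numeric test, which the trace shows as '[' '' -eq 1 ']', and [ rejects the empty string as an integer, so the condition simply evaluates false and the run continues. A defensive rewrite would default the value first (a sketch; $SOME_FLAG stands in for whatever variable line 33 actually reads, which this log never names, and the consequent is hypothetical):

# Broken shape, as traced: [ "" -eq 1 ] is a type error, not "false"
# [ "$SOME_FLAG" -eq 1 ] && NVMF_APP+=(--extra-arg)

# Defensive shape: substitute 0 when the flag is unset or empty
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    NVMF_APP+=(--extra-arg)   # hypothetical branch; the real one is not visible here
fi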
00:18:09.793 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:09.794 Cannot find device "nvmf_init_br" 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:18:09.794 00:02:16 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:09.794 Cannot find device "nvmf_init_br2" 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:09.794 Cannot find device "nvmf_tgt_br" 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:09.794 Cannot find device "nvmf_tgt_br2" 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:09.794 Cannot find device "nvmf_init_br" 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:09.794 Cannot find device "nvmf_init_br2" 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:18:09.794 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:10.053 Cannot find device "nvmf_tgt_br" 00:18:10.053 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:18:10.053 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:10.053 Cannot find device "nvmf_tgt_br2" 00:18:10.053 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:18:10.053 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:10.053 Cannot find device "nvmf_br" 00:18:10.053 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:18:10.053 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:10.053 Cannot find device "nvmf_init_if" 00:18:10.053 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:18:10.053 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:10.053 Cannot find device "nvmf_init_if2" 00:18:10.053 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:18:10.053 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:10.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.053 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:18:10.053 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:10.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.053 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:18:10.053 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:10.054 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:10.313 00:02:16 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:10.313 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:10.313 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:18:10.313 00:18:10.313 --- 10.0.0.3 ping statistics --- 00:18:10.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.313 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:10.313 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:10.313 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:18:10.313 00:18:10.313 --- 10.0.0.4 ping statistics --- 00:18:10.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.313 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:10.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:10.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:10.313 00:18:10.313 --- 10.0.0.1 ping statistics --- 00:18:10.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.313 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:10.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:10.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:18:10.313 00:18:10.313 --- 10.0.0.2 ping statistics --- 00:18:10.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.313 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=76641 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 76641 00:18:10.313 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 76641 ']' 00:18:10.314 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.314 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.314 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
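Everything from nvmf_veth_init down to the waitforlisten prompt above is the standard virtual-NIC bring-up for NET_TYPE=virt: four veth pairs (two initiator-side pairs left in the root namespace, two target-side pairs whose "if" ends are moved into nvmf_tgt_ns_spdk), every bridge end enslaved to nvmf_br, comment-tagged iptables ACCEPT rules, and a four-way ping check before the target starts. Reduced to a single initiator/target pair, the sequence is (a condensed sketch of the traced commands, not the harness itself; root required, paths as in this CI run):

# One initiator veth in the root namespace, one target veth in the test namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br     # the bridge stitches the namespaces together
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3                          # initiator -> target reachability, as above

# Launch the target inside the namespace and poll its RPC socket, which is
# roughly what waitforlisten does before the provisioning RPCs seen below
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done

The provisioning that follows below (TCP transport, Malloc0 bdev, the cnode1 subsystem, its namespace, and the 10.0.0.3:4420 listener) is then issued through that same RPC socket.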
00:18:10.314 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.314 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:11.251 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.251 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:11.251 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:11.251 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.251 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:11.251 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.252 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:11.252 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.252 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:11.511 Malloc0 00:18:11.511 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.511 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:11.511 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.511 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:11.511 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.511 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:11.511 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.511 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:11.511 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.511 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:11.511 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.511 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:11.511 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.511 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:18:11.511 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:18:12.078 Shutting down the fuzz application 00:18:12.078 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:12.653 Shutting down the fuzz application 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:12.653 rmmod nvme_tcp 00:18:12.653 rmmod nvme_fabrics 00:18:12.653 rmmod nvme_keyring 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 76641 ']' 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 76641 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 76641 ']' 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 76641 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76641 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:12.653 killing process with pid 76641 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76641' 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 76641 00:18:12.653 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 76641 00:18:13.588 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:13.588 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:13.588 00:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:13.588 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:18:13.588 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:18:13.588 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:13.588 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:18:13.588 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:13.588 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:13.588 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:13.848 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:13.848 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:13.848 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:13.848 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:13.849 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:13.849 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:13.849 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:13.849 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:13.849 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:13.849 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:13.849 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.849 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:13.849 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:13.849 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.849 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.849 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.849 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:18:13.849 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:13.849 00:18:13.849 real 0m4.347s 00:18:13.849 user 0m4.557s 00:18:13.849 sys 0m0.854s 00:18:13.849 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.849 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:13.849 ************************************ 00:18:13.849 END TEST nvmf_fuzz 00:18:13.849 ************************************ 00:18:14.110 00:02:20 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:14.110 ************************************ 00:18:14.110 START TEST nvmf_multiconnection 00:18:14.110 ************************************ 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:14.110 * Looking for test storage... 00:18:14.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:14.110 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:14.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.111 --rc genhtml_branch_coverage=1 00:18:14.111 --rc genhtml_function_coverage=1 00:18:14.111 --rc genhtml_legend=1 00:18:14.111 --rc geninfo_all_blocks=1 00:18:14.111 --rc geninfo_unexecuted_blocks=1 00:18:14.111 00:18:14.111 ' 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:14.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.111 --rc genhtml_branch_coverage=1 00:18:14.111 --rc genhtml_function_coverage=1 00:18:14.111 --rc genhtml_legend=1 00:18:14.111 --rc geninfo_all_blocks=1 00:18:14.111 --rc geninfo_unexecuted_blocks=1 00:18:14.111 00:18:14.111 ' 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:14.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.111 --rc genhtml_branch_coverage=1 00:18:14.111 --rc genhtml_function_coverage=1 00:18:14.111 --rc genhtml_legend=1 00:18:14.111 --rc geninfo_all_blocks=1 00:18:14.111 --rc geninfo_unexecuted_blocks=1 00:18:14.111 00:18:14.111 ' 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:14.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.111 --rc genhtml_branch_coverage=1 00:18:14.111 --rc genhtml_function_coverage=1 00:18:14.111 --rc genhtml_legend=1 00:18:14.111 --rc geninfo_all_blocks=1 00:18:14.111 --rc geninfo_unexecuted_blocks=1 00:18:14.111 00:18:14.111 ' 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.111 
00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:14.111 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:14.111 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:14.385 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.385 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:14.385 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:14.385 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:14.385 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.385 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.385 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.385 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:14.385 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:14.385 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:14.386 00:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:14.386 Cannot find device "nvmf_init_br" 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:14.386 Cannot find device "nvmf_init_br2" 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:14.386 Cannot find device "nvmf_tgt_br" 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:14.386 Cannot find device "nvmf_tgt_br2" 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:14.386 Cannot find device "nvmf_init_br" 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:14.386 Cannot find device "nvmf_init_br2" 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:14.386 Cannot find device "nvmf_tgt_br" 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:14.386 Cannot find device "nvmf_tgt_br2" 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:14.386 Cannot find device "nvmf_br" 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:14.386 Cannot find device "nvmf_init_if" 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:18:14.386 Cannot find device "nvmf_init_if2" 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:14.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:14.386 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:14.386 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:14.386 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:14.386 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:14.386 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:14.386 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:14.386 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:14.386 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:14.386 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:14.386 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:14.386 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:14.386 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:14.659 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:14.659 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:18:14.659 00:18:14.659 --- 10.0.0.3 ping statistics --- 00:18:14.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.659 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:14.659 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:14.659 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:18:14.659 00:18:14.659 --- 10.0.0.4 ping statistics --- 00:18:14.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.659 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:14.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:14.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:18:14.659 00:18:14.659 --- 10.0.0.1 ping statistics --- 00:18:14.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.659 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:14.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:18:14.659 00:18:14.659 --- 10.0.0.2 ping statistics --- 00:18:14.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.659 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:14.659 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.660 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:14.660 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:14.660 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:14.660 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:14.660 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.660 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:14.660 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=76898 00:18:14.660 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:14.660 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 76898 00:18:14.660 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 76898 ']' 00:18:14.660 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.660 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.660 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
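Everything from nvmf_veth_init down to the four pings above builds a self-contained test network: the initiator veth ends stay in the root namespace (10.0.0.1/.2), the target veth ends are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/.4), the peer ends are enslaved to the nvmf_br bridge, and iptables rules admit TCP port 4420. Condensed to a single initiator/target pair, the recipe looks like this (a sketch assuming root privileges and iproute2; interface names match the log, error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair, stays in root ns
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # far end into the target namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br                        # bridge the near ends together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                             # root ns -> target address

The pings at the end are the same sanity check the harness performs: root namespace to both target addresses, then via ip netns exec from the target namespace back to both initiator addresses.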
00:18:14.660 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.660 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:14.660 [2024-11-19 00:02:21.345297] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:18:14.660 [2024-11-19 00:02:21.345457] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.919 [2024-11-19 00:02:21.513475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:14.919 [2024-11-19 00:02:21.606287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.919 [2024-11-19 00:02:21.606363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.919 [2024-11-19 00:02:21.606380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.919 [2024-11-19 00:02:21.606391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.919 [2024-11-19 00:02:21.606403] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.178 [2024-11-19 00:02:21.608321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.178 [2024-11-19 00:02:21.610670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.178 [2024-11-19 00:02:21.610996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.178 [2024-11-19 00:02:21.611396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:15.178 [2024-11-19 00:02:21.782179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:15.747 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.747 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:18:15.747 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:15.747 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:15.747 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:15.747 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.747 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:15.747 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.747 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:15.747 [2024-11-19 00:02:22.340086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.747 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.747 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:18:15.747 00:02:22 
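waitforlisten 76898 above is what turns "process forked" into "RPC server ready": it polls the UNIX domain socket until the target answers, so the nvmf_create_transport call that follows cannot race the application start-up. Roughly (a sketch; the in-tree helper in autotest_common.sh does more bookkeeping, and rpc_get_methods is a standard SPDK RPC):

    pid=76898                          # nvmfpid reported in the log
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        # the RPC server is up once any method call succeeds on the socket
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done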
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:15.747 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:15.747 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.747 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.007 Malloc1 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.007 [2024-11-19 00:02:22.460592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.007 Malloc2 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.007 Malloc3 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.007 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.267 Malloc4 00:18:16.267 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.267 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:16.267 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.267 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.267 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.267 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:16.267 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.267 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.267 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.267 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:18:16.267 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.267 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.267 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.267 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.268 Malloc5 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:16.268 
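The block repeating above (and for the remaining malloc bdevs below) is the provisioning loop from multiconnection.sh: for each i in 1..$NVMF_SUBSYS it creates a 64 MB malloc bdev with 512-byte blocks, a subsystem nqn.2016-06.io.spdk:cnode$i with serial SPDK$i and any-host access (-a), attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.3:4420. Stripped of the rpc_cmd wrapper, each iteration is four rpc.py calls:

    for i in $(seq 1 11); do
        scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
    done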
00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.268 Malloc6 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.268 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.529 Malloc7 00:18:16.529 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.529 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:16.529 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.529 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.529 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.529 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:16.529 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.529 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.529 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.529 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:18:16.529 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.529 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.529 Malloc8 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.529 
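Once all eleven iterations finish, the resulting layout can be sanity-checked from the RPC side before any host connects. A quick sketch (nvmf_get_subsystems is a standard SPDK RPC; the jq filter is illustrative and assumes jq is installed):

    # list every subsystem NQN with its serial number
    scripts/rpc.py nvmf_get_subsystems | jq -r '.[] | "\(.nqn) \(.serial_number // "-")"'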
00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.529 Malloc9 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:16.529 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.530 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.789 Malloc10 00:18:16.789 00:02:23 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.790 Malloc11 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:16.790 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:18:17.049 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:17.049 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:17.049 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:17.049 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:17.049 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:18.953 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:18.953 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:18.953 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:18:18.953 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:18.953 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:18.953 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:18.953 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.953 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:18:19.213 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:19.213 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:19.213 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:19.213 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:19.213 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:21.116 00:02:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:21.116 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:21.116 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:18:21.116 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:21.116 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:21.116 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:21.116 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:21.116 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:18:21.375 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:21.375 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:21.375 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.375 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:21.375 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:23.278 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:23.278 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:23.279 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:18:23.279 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:23.279 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.279 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:23.279 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:23.279 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:18:23.537 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:23.538 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:23.538 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:23.538 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:18:23.538 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:25.442 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:25.442 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:25.442 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:18:25.442 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:25.442 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:25.442 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:25.442 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:25.442 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:18:25.702 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:25.702 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:25.702 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:25.702 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:25.702 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:27.607 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:27.607 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:27.607 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:18:27.607 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:27.607 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:27.607 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:27.607 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:27.607 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:18:27.867 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:27.867 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:27.867 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local 
nvme_device_counter=1 nvme_devices=0 00:18:27.867 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:27.867 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:29.771 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:29.771 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:29.771 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:18:29.771 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:29.771 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:29.771 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:29.771 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:29.771 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:18:30.030 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:30.030 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:30.030 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:30.030 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:30.030 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:31.935 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:31.935 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:31.935 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:18:31.935 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:31.935 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:31.935 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:31.935 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:31.935 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:18:32.194 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:32.194 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1202 -- # local i=0 00:18:32.194 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:32.194 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:32.194 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:34.096 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:34.096 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:34.096 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:18:34.096 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:34.096 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:34.096 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:34.096 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.096 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:18:34.355 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:34.355 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:34.355 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:34.355 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:34.356 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:36.280 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:36.280 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:36.280 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:18:36.280 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:36.280 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:36.280 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:36.280 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.280 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:18:36.553 00:02:43 
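Each nvme connect above is followed by waitforserial SPDKn, the helper whose expansion keeps recurring in this log: it re-runs lsblk until a block device whose SERIAL column matches appears, giving the kernel time to finish the fabric handshake and create the namespace node. Condensed (a sketch; the common.sh original also takes an optional expected-device count, and the real connects additionally pass --hostnqn/--hostid derived from the machine UUID, omitted here):

    # waitforserial SERIAL [COUNT] -- wait until COUNT block devices carry SERIAL
    waitforserial() {
        local serial=$1 want=${2:-1} i=0
        while (( i++ <= 15 )); do
            sleep 2
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == want )) && return 0
        done
        echo "timed out waiting for serial $serial" >&2
        return 1
    }

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420
    waitforserial SPDK10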
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:36.553 00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:36.553 00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:36.553 00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:36.553 00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:38.457 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:38.457 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:38.457 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:18:38.457 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:38.457 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:38.457 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:38.457 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:38.457 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:18:38.716 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:38.716 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:38.716 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:38.716 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:38.716 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:40.619 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:40.619 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:40.619 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:18:40.619 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:40.619 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:40.619 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:40.619 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:40.619 [global] 00:18:40.619 thread=1 00:18:40.619 invalidate=1 00:18:40.619 rw=read 00:18:40.619 time_based=1 
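Every one of the eleven subsystem attaches in this test follows the pattern traced above: multiconnection.sh issues nvme connect against nqn.2016-06.io.spdk:cnode<N> at 10.0.0.3:4420 over TCP, then blocks in waitforserial until the namespace surfaces as a block device whose SERIAL is SPDK<N>. A minimal bash reconstruction of that helper, pieced together from the autotest_common.sh@1202-1212 xtrace lines (not the verbatim SPDK function; in particular, the [[ -n '' ]] test in the trace appears to guard an optional expected-device-count argument):

    waitforserial() {
        # Reconstructed from the xtrace above; the real helper may differ in detail.
        local serial=$1
        local i=0
        local nvme_device_counter=${2:-1} nvme_devices=0   # optional 2nd arg: expected device count
        sleep 2                                            # matches the sleep at @1209
        while (( i++ <= 15 )); do                          # bounded retry, per @1210
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2
        done
        return 1
    }

    # Condensed form of the connect loop at multiconnection.sh@28-30; HOSTNQN and
    # HOSTID stand in for the uuid-derived values visible in the trace.
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
            -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.3 -s 4420
        waitforserial "SPDK$i"
    done

In this run every poll succeeds on its first iteration (grep -c reports 1 roughly two seconds after each connect), so the full eleven-subsystem attach costs about 25 seconds of wall time. The fio job file generated for the read pass begins above and continues below.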
00:18:40.619 runtime=10 00:18:40.619 ioengine=libaio 00:18:40.619 direct=1 00:18:40.619 bs=262144 00:18:40.619 iodepth=64 00:18:40.619 norandommap=1 00:18:40.619 numjobs=1 00:18:40.619 00:18:40.619 [job0] 00:18:40.619 filename=/dev/nvme0n1 00:18:40.619 [job1] 00:18:40.619 filename=/dev/nvme10n1 00:18:40.619 [job2] 00:18:40.619 filename=/dev/nvme1n1 00:18:40.619 [job3] 00:18:40.619 filename=/dev/nvme2n1 00:18:40.877 [job4] 00:18:40.877 filename=/dev/nvme3n1 00:18:40.877 [job5] 00:18:40.877 filename=/dev/nvme4n1 00:18:40.877 [job6] 00:18:40.877 filename=/dev/nvme5n1 00:18:40.877 [job7] 00:18:40.877 filename=/dev/nvme6n1 00:18:40.877 [job8] 00:18:40.877 filename=/dev/nvme7n1 00:18:40.877 [job9] 00:18:40.877 filename=/dev/nvme8n1 00:18:40.877 [job10] 00:18:40.877 filename=/dev/nvme9n1 00:18:40.877 Could not set queue depth (nvme0n1) 00:18:40.877 Could not set queue depth (nvme10n1) 00:18:40.877 Could not set queue depth (nvme1n1) 00:18:40.877 Could not set queue depth (nvme2n1) 00:18:40.877 Could not set queue depth (nvme3n1) 00:18:40.877 Could not set queue depth (nvme4n1) 00:18:40.877 Could not set queue depth (nvme5n1) 00:18:40.878 Could not set queue depth (nvme6n1) 00:18:40.878 Could not set queue depth (nvme7n1) 00:18:40.878 Could not set queue depth (nvme8n1) 00:18:40.878 Could not set queue depth (nvme9n1) 00:18:41.136 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.136 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.136 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.137 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.137 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.137 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.137 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.137 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.137 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.137 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.137 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:41.137 fio-3.35 00:18:41.137 Starting 11 threads 00:18:53.346 00:18:53.346 job0: (groupid=0, jobs=1): err= 0: pid=77358: Tue Nov 19 00:02:58 2024 00:18:53.346 read: IOPS=72, BW=18.1MiB/s (19.0MB/s)(185MiB/10179msec) 00:18:53.346 slat (usec): min=16, max=433262, avg=13584.70, stdev=44520.35 00:18:53.346 clat (msec): min=24, max=1134, avg=867.40, stdev=197.16 00:18:53.346 lat (msec): min=25, max=1348, avg=880.99, stdev=196.64 00:18:53.346 clat percentiles (msec): 00:18:53.346 | 1.00th=[ 243], 5.00th=[ 498], 10.00th=[ 542], 20.00th=[ 726], 00:18:53.346 | 30.00th=[ 785], 40.00th=[ 852], 50.00th=[ 919], 60.00th=[ 978], 00:18:53.346 | 70.00th=[ 1011], 80.00th=[ 1045], 90.00th=[ 1070], 95.00th=[ 1083], 00:18:53.346 | 99.00th=[ 1099], 99.50th=[ 1133], 99.90th=[ 1133], 99.95th=[ 1133], 00:18:53.346 | 99.99th=[ 1133] 00:18:53.346 bw ( KiB/s): min= 4608, 
max=29696, per=1.79%, avg=17247.65, stdev=8123.31, samples=20 00:18:53.346 iops : min= 18, max= 116, avg=67.20, stdev=31.75, samples=20 00:18:53.346 lat (msec) : 50=0.41%, 100=0.41%, 250=0.54%, 500=4.74%, 750=20.33% 00:18:53.346 lat (msec) : 1000=42.55%, 2000=31.03% 00:18:53.346 cpu : usr=0.05%, sys=0.35%, ctx=136, majf=0, minf=4097 00:18:53.346 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.5% 00:18:53.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.346 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.346 issued rwts: total=738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.346 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.346 job1: (groupid=0, jobs=1): err= 0: pid=77359: Tue Nov 19 00:02:58 2024 00:18:53.346 read: IOPS=1357, BW=339MiB/s (356MB/s)(3400MiB/10020msec) 00:18:53.346 slat (usec): min=19, max=54752, avg=730.73, stdev=1586.69 00:18:53.346 clat (msec): min=17, max=158, avg=46.33, stdev= 7.39 00:18:53.346 lat (msec): min=19, max=158, avg=47.06, stdev= 7.47 00:18:53.346 clat percentiles (msec): 00:18:53.346 | 1.00th=[ 40], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 44], 00:18:53.346 | 30.00th=[ 45], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 47], 00:18:53.346 | 70.00th=[ 48], 80.00th=[ 49], 90.00th=[ 51], 95.00th=[ 52], 00:18:53.346 | 99.00th=[ 55], 99.50th=[ 69], 99.90th=[ 157], 99.95th=[ 159], 00:18:53.346 | 99.99th=[ 159] 00:18:53.346 bw ( KiB/s): min=253440, max=361472, per=35.87%, avg=346432.05, stdev=22450.03, samples=20 00:18:53.347 iops : min= 990, max= 1412, avg=1353.20, stdev=87.68, samples=20 00:18:53.347 lat (msec) : 20=0.03%, 50=90.99%, 100=8.52%, 250=0.46% 00:18:53.347 cpu : usr=0.63%, sys=4.56%, ctx=2862, majf=0, minf=4097 00:18:53.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:18:53.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.347 issued rwts: total=13601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.347 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.347 job2: (groupid=0, jobs=1): err= 0: pid=77360: Tue Nov 19 00:02:58 2024 00:18:53.347 read: IOPS=81, BW=20.3MiB/s (21.3MB/s)(207MiB/10184msec) 00:18:53.347 slat (usec): min=23, max=392726, avg=12082.41, stdev=37677.39 00:18:53.347 clat (msec): min=18, max=1136, avg=773.99, stdev=181.53 00:18:53.347 lat (msec): min=18, max=1136, avg=786.08, stdev=184.03 00:18:53.347 clat percentiles (msec): 00:18:53.347 | 1.00th=[ 74], 5.00th=[ 388], 10.00th=[ 464], 20.00th=[ 718], 00:18:53.347 | 30.00th=[ 751], 40.00th=[ 785], 50.00th=[ 802], 60.00th=[ 835], 00:18:53.347 | 70.00th=[ 860], 80.00th=[ 902], 90.00th=[ 944], 95.00th=[ 1003], 00:18:53.347 | 99.00th=[ 1053], 99.50th=[ 1133], 99.90th=[ 1133], 99.95th=[ 1133], 00:18:53.347 | 99.99th=[ 1133] 00:18:53.347 bw ( KiB/s): min= 4608, max=32191, per=2.03%, avg=19575.45, stdev=7654.25, samples=20 00:18:53.347 iops : min= 18, max= 125, avg=76.30, stdev=29.88, samples=20 00:18:53.347 lat (msec) : 20=0.12%, 100=1.21%, 250=1.81%, 500=7.85%, 750=18.36% 00:18:53.347 lat (msec) : 1000=66.18%, 2000=4.47% 00:18:53.347 cpu : usr=0.06%, sys=0.39%, ctx=154, majf=0, minf=4097 00:18:53.347 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4% 00:18:53.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.347 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:18:53.347 issued rwts: total=828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.347 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.347 job3: (groupid=0, jobs=1): err= 0: pid=77361: Tue Nov 19 00:02:58 2024 00:18:53.347 read: IOPS=80, BW=20.0MiB/s (21.0MB/s)(204MiB/10184msec) 00:18:53.347 slat (usec): min=20, max=522350, avg=12310.34, stdev=38649.99 00:18:53.347 clat (msec): min=133, max=1075, avg=785.63, stdev=207.49 00:18:53.347 lat (msec): min=185, max=1197, avg=797.94, stdev=209.03 00:18:53.347 clat percentiles (msec): 00:18:53.347 | 1.00th=[ 203], 5.00th=[ 257], 10.00th=[ 498], 20.00th=[ 709], 00:18:53.347 | 30.00th=[ 751], 40.00th=[ 785], 50.00th=[ 844], 60.00th=[ 885], 00:18:53.347 | 70.00th=[ 911], 80.00th=[ 936], 90.00th=[ 969], 95.00th=[ 995], 00:18:53.347 | 99.00th=[ 1053], 99.50th=[ 1053], 99.90th=[ 1083], 99.95th=[ 1083], 00:18:53.347 | 99.99th=[ 1083] 00:18:53.347 bw ( KiB/s): min= 5120, max=32256, per=1.99%, avg=19251.35, stdev=7738.70, samples=20 00:18:53.347 iops : min= 20, max= 126, avg=75.05, stdev=30.28, samples=20 00:18:53.347 lat (msec) : 250=4.53%, 500=6.00%, 750=17.65%, 1000=67.03%, 2000=4.78% 00:18:53.347 cpu : usr=0.06%, sys=0.42%, ctx=135, majf=0, minf=4097 00:18:53.347 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3% 00:18:53.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.347 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.347 issued rwts: total=816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.347 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.347 job4: (groupid=0, jobs=1): err= 0: pid=77362: Tue Nov 19 00:02:58 2024 00:18:53.347 read: IOPS=1329, BW=332MiB/s (348MB/s)(3330MiB/10018msec) 00:18:53.347 slat (usec): min=19, max=12625, avg=746.78, stdev=1777.56 00:18:53.347 clat (usec): min=15911, max=69234, avg=47325.39, stdev=4643.63 00:18:53.347 lat (usec): min=16559, max=69256, avg=48072.17, stdev=4559.13 00:18:53.347 clat percentiles (usec): 00:18:53.347 | 1.00th=[38536], 5.00th=[40633], 10.00th=[41681], 20.00th=[43254], 00:18:53.347 | 30.00th=[44303], 40.00th=[45876], 50.00th=[46924], 60.00th=[48497], 00:18:53.347 | 70.00th=[50070], 80.00th=[51643], 90.00th=[53740], 95.00th=[54789], 00:18:53.347 | 99.00th=[57410], 99.50th=[57934], 99.90th=[62129], 99.95th=[65274], 00:18:53.347 | 99.99th=[69731] 00:18:53.347 bw ( KiB/s): min=327310, max=348487, per=35.12%, avg=339223.50, stdev=5765.28, samples=20 00:18:53.347 iops : min= 1278, max= 1361, avg=1325.00, stdev=22.54, samples=20 00:18:53.347 lat (msec) : 20=0.06%, 50=68.70%, 100=31.24% 00:18:53.347 cpu : usr=0.61%, sys=4.10%, ctx=2146, majf=0, minf=4097 00:18:53.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:18:53.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.347 issued rwts: total=13318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.347 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.347 job5: (groupid=0, jobs=1): err= 0: pid=77363: Tue Nov 19 00:02:58 2024 00:18:53.347 read: IOPS=83, BW=20.9MiB/s (21.9MB/s)(213MiB/10186msec) 00:18:53.347 slat (usec): min=17, max=339525, avg=11756.24, stdev=36126.52 00:18:53.347 clat (msec): min=99, max=1106, avg=751.95, stdev=204.82 00:18:53.347 lat (msec): min=99, max=1131, avg=763.71, stdev=207.61 00:18:53.347 clat percentiles (msec): 00:18:53.347 | 1.00th=[ 
102], 5.00th=[ 192], 10.00th=[ 443], 20.00th=[ 693], 00:18:53.347 | 30.00th=[ 735], 40.00th=[ 760], 50.00th=[ 793], 60.00th=[ 818], 00:18:53.347 | 70.00th=[ 852], 80.00th=[ 885], 90.00th=[ 927], 95.00th=[ 1003], 00:18:53.347 | 99.00th=[ 1083], 99.50th=[ 1083], 99.90th=[ 1099], 99.95th=[ 1099], 00:18:53.347 | 99.99th=[ 1099] 00:18:53.347 bw ( KiB/s): min= 8704, max=31744, per=2.09%, avg=20165.00, stdev=6038.40, samples=20 00:18:53.347 iops : min= 34, max= 124, avg=78.60, stdev=23.62, samples=20 00:18:53.347 lat (msec) : 100=0.35%, 250=5.05%, 500=5.05%, 750=26.76%, 1000=57.16% 00:18:53.347 lat (msec) : 2000=5.63% 00:18:53.347 cpu : usr=0.07%, sys=0.38%, ctx=158, majf=0, minf=4097 00:18:53.347 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.6% 00:18:53.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.347 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.347 issued rwts: total=852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.347 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.347 job6: (groupid=0, jobs=1): err= 0: pid=77364: Tue Nov 19 00:02:58 2024 00:18:53.347 read: IOPS=221, BW=55.4MiB/s (58.1MB/s)(559MiB/10089msec) 00:18:53.347 slat (usec): min=20, max=97788, avg=4467.72, stdev=11092.09 00:18:53.347 clat (msec): min=24, max=437, avg=283.72, stdev=39.24 00:18:53.347 lat (msec): min=26, max=437, avg=288.19, stdev=39.88 00:18:53.347 clat percentiles (msec): 00:18:53.347 | 1.00th=[ 109], 5.00th=[ 234], 10.00th=[ 253], 20.00th=[ 268], 00:18:53.347 | 30.00th=[ 275], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 296], 00:18:53.347 | 70.00th=[ 300], 80.00th=[ 309], 90.00th=[ 317], 95.00th=[ 326], 00:18:53.347 | 99.00th=[ 359], 99.50th=[ 376], 99.90th=[ 388], 99.95th=[ 388], 00:18:53.347 | 99.99th=[ 439] 00:18:53.347 bw ( KiB/s): min=49053, max=60928, per=5.76%, avg=55637.65, stdev=2955.13, samples=20 00:18:53.347 iops : min= 191, max= 238, avg=217.15, stdev=11.58, samples=20 00:18:53.347 lat (msec) : 50=0.04%, 100=0.72%, 250=8.18%, 500=91.06% 00:18:53.347 cpu : usr=0.09%, sys=0.82%, ctx=460, majf=0, minf=4097 00:18:53.347 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:18:53.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.347 issued rwts: total=2237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.347 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.347 job7: (groupid=0, jobs=1): err= 0: pid=77365: Tue Nov 19 00:02:58 2024 00:18:53.347 read: IOPS=72, BW=18.1MiB/s (18.9MB/s)(184MiB/10191msec) 00:18:53.347 slat (usec): min=20, max=413402, avg=13665.84, stdev=44464.31 00:18:53.347 clat (msec): min=22, max=1145, avg=871.07, stdev=176.55 00:18:53.347 lat (msec): min=23, max=1318, avg=884.74, stdev=174.93 00:18:53.347 clat percentiles (msec): 00:18:53.348 | 1.00th=[ 292], 5.00th=[ 584], 10.00th=[ 684], 20.00th=[ 743], 00:18:53.348 | 30.00th=[ 785], 40.00th=[ 844], 50.00th=[ 902], 60.00th=[ 953], 00:18:53.348 | 70.00th=[ 978], 80.00th=[ 1028], 90.00th=[ 1045], 95.00th=[ 1083], 00:18:53.348 | 99.00th=[ 1133], 99.50th=[ 1150], 99.90th=[ 1150], 99.95th=[ 1150], 00:18:53.348 | 99.99th=[ 1150] 00:18:53.348 bw ( KiB/s): min= 4087, max=30720, per=1.78%, avg=17198.75, stdev=8863.83, samples=20 00:18:53.348 iops : min= 15, max= 120, avg=67.00, stdev=34.74, samples=20 00:18:53.348 lat (msec) : 50=0.54%, 100=0.27%, 250=0.14%, 500=2.58%, 750=16.85% 
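The per-job statistics being reported here (this job's remaining latency buckets, cpu and IO-depth lines continue below) all come from the single fio invocation at multiconnection.sh@33, whose flags map directly onto the job file dumped before the results: -i 262144 becomes bs=262144, -d 64 becomes iodepth=64, -t read becomes rw=read, -r 10 becomes runtime=10, and -p nvmf selects the NVMe devices with SPDK serials. The wrapper's source is not part of this log, so the following is a hypothetical equivalent that would emit the same job file, not the actual script:

    gen_fio_job() {
        # Hypothetical stand-in for scripts/fio-wrapper: one [jobN] per SPDK namespace.
        local rw=$1 runtime=$2    # mirrors "-t read -r 10"
        cat <<EOF
    [global]
    thread=1
    invalidate=1
    rw=$rw
    time_based=1
    runtime=$runtime
    ioengine=libaio
    direct=1
    bs=262144
    iodepth=64
    norandommap=1
    numjobs=1
    EOF
        local n=0 dev
        for dev in $(lsblk -l -o NAME,SERIAL | awk '$2 ~ /^SPDK/ {print $1}'); do
            printf '\n[job%d]\nfilename=/dev/%s\n' "$n" "$dev"
            n=$((n + 1))
        done
    }

    gen_fio_job read 10 > multiconnection.fio && fio multiconnection.fio

The job-to-device mapping in the dump (job1 on /dev/nvme10n1, sitting between job0 on nvme0n1 and job2 on nvme1n1) suggests the devices are enumerated in lexicographic listing order rather than numerically.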
00:18:53.348 lat (msec) : 1000=52.72%, 2000=26.90% 00:18:53.348 cpu : usr=0.03%, sys=0.31%, ctx=134, majf=0, minf=4097 00:18:53.348 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.4% 00:18:53.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.348 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.348 issued rwts: total=736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.348 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.348 job8: (groupid=0, jobs=1): err= 0: pid=77366: Tue Nov 19 00:02:58 2024 00:18:53.348 read: IOPS=88, BW=22.0MiB/s (23.1MB/s)(225MiB/10189msec) 00:18:53.348 slat (usec): min=20, max=323897, avg=10901.95, stdev=33193.65 00:18:53.348 clat (msec): min=16, max=1048, avg=714.04, stdev=203.64 00:18:53.348 lat (msec): min=18, max=1048, avg=724.94, stdev=207.27 00:18:53.348 clat percentiles (msec): 00:18:53.348 | 1.00th=[ 38], 5.00th=[ 226], 10.00th=[ 292], 20.00th=[ 701], 00:18:53.348 | 30.00th=[ 735], 40.00th=[ 751], 50.00th=[ 768], 60.00th=[ 785], 00:18:53.348 | 70.00th=[ 810], 80.00th=[ 835], 90.00th=[ 877], 95.00th=[ 902], 00:18:53.348 | 99.00th=[ 953], 99.50th=[ 961], 99.90th=[ 1045], 99.95th=[ 1045], 00:18:53.348 | 99.99th=[ 1045] 00:18:53.348 bw ( KiB/s): min=11776, max=37888, per=2.21%, avg=21341.65, stdev=5463.27, samples=20 00:18:53.348 iops : min= 46, max= 148, avg=83.20, stdev=21.33, samples=20 00:18:53.348 lat (msec) : 20=0.22%, 50=1.78%, 100=0.78%, 250=4.57%, 500=4.90% 00:18:53.348 lat (msec) : 750=26.28%, 1000=61.14%, 2000=0.33% 00:18:53.348 cpu : usr=0.04%, sys=0.36%, ctx=198, majf=0, minf=4097 00:18:53.348 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=93.0% 00:18:53.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.348 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.348 issued rwts: total=898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.348 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.348 job9: (groupid=0, jobs=1): err= 0: pid=77367: Tue Nov 19 00:02:58 2024 00:18:53.348 read: IOPS=223, BW=55.8MiB/s (58.6MB/s)(564MiB/10091msec) 00:18:53.348 slat (usec): min=16, max=172587, avg=4400.86, stdev=11147.13 00:18:53.348 clat (msec): min=17, max=390, avg=281.70, stdev=52.60 00:18:53.348 lat (msec): min=17, max=391, avg=286.10, stdev=53.19 00:18:53.348 clat percentiles (msec): 00:18:53.348 | 1.00th=[ 75], 5.00th=[ 159], 10.00th=[ 245], 20.00th=[ 268], 00:18:53.348 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 292], 60.00th=[ 296], 00:18:53.348 | 70.00th=[ 305], 80.00th=[ 313], 90.00th=[ 321], 95.00th=[ 338], 00:18:53.348 | 99.00th=[ 363], 99.50th=[ 376], 99.90th=[ 393], 99.95th=[ 393], 00:18:53.348 | 99.99th=[ 393] 00:18:53.348 bw ( KiB/s): min=49564, max=72192, per=5.81%, avg=56067.80, stdev=4544.01, samples=20 00:18:53.348 iops : min= 193, max= 282, avg=218.90, stdev=17.81, samples=20 00:18:53.348 lat (msec) : 20=0.09%, 50=0.67%, 100=2.57%, 250=8.30%, 500=88.38% 00:18:53.348 cpu : usr=0.08%, sys=0.85%, ctx=490, majf=0, minf=4097 00:18:53.348 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:18:53.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.348 issued rwts: total=2254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.348 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.348 job10: (groupid=0, 
jobs=1): err= 0: pid=77368: Tue Nov 19 00:02:58 2024 00:18:53.348 read: IOPS=214, BW=53.7MiB/s (56.3MB/s)(542MiB/10088msec) 00:18:53.348 slat (usec): min=19, max=185154, avg=4617.98, stdev=11713.63 00:18:53.348 clat (msec): min=33, max=402, avg=292.78, stdev=37.07 00:18:53.348 lat (msec): min=33, max=463, avg=297.40, stdev=37.08 00:18:53.348 clat percentiles (msec): 00:18:53.348 | 1.00th=[ 190], 5.00th=[ 234], 10.00th=[ 253], 20.00th=[ 271], 00:18:53.348 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 296], 60.00th=[ 300], 00:18:53.348 | 70.00th=[ 309], 80.00th=[ 321], 90.00th=[ 334], 95.00th=[ 347], 00:18:53.348 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 397], 99.95th=[ 401], 00:18:53.348 | 99.99th=[ 401] 00:18:53.348 bw ( KiB/s): min=34885, max=63361, per=5.58%, avg=53848.65, stdev=5390.82, samples=20 00:18:53.348 iops : min= 136, max= 247, avg=210.15, stdev=21.03, samples=20 00:18:53.348 lat (msec) : 50=0.18%, 100=0.05%, 250=8.44%, 500=91.33% 00:18:53.348 cpu : usr=0.12%, sys=0.73%, ctx=481, majf=0, minf=4097 00:18:53.348 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:18:53.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:53.348 issued rwts: total=2168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.348 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.348 00:18:53.348 Run status group 0 (all jobs): 00:18:53.348 READ: bw=943MiB/s (989MB/s), 18.1MiB/s-339MiB/s (18.9MB/s-356MB/s), io=9612MiB (10.1GB), run=10018-10191msec 00:18:53.348 00:18:53.348 Disk stats (read/write): 00:18:53.348 nvme0n1: ios=1352/0, merge=0/0, ticks=1165844/0, in_queue=1165844, util=97.80% 00:18:53.348 nvme10n1: ios=27108/0, merge=0/0, ticks=1240731/0, in_queue=1240731, util=97.91% 00:18:53.348 nvme1n1: ios=1532/0, merge=0/0, ticks=1194414/0, in_queue=1194414, util=98.19% 00:18:53.348 nvme2n1: ios=1505/0, merge=0/0, ticks=1192288/0, in_queue=1192288, util=98.22% 00:18:53.348 nvme3n1: ios=26544/0, merge=0/0, ticks=1241376/0, in_queue=1241376, util=98.26% 00:18:53.348 nvme4n1: ios=1580/0, merge=0/0, ticks=1195539/0, in_queue=1195539, util=98.42% 00:18:53.348 nvme5n1: ios=4354/0, merge=0/0, ticks=1225548/0, in_queue=1225548, util=98.47% 00:18:53.348 nvme6n1: ios=1344/0, merge=0/0, ticks=1186095/0, in_queue=1186095, util=98.66% 00:18:53.348 nvme7n1: ios=1670/0, merge=0/0, ticks=1202246/0, in_queue=1202246, util=98.92% 00:18:53.348 nvme8n1: ios=4394/0, merge=0/0, ticks=1229333/0, in_queue=1229333, util=99.02% 00:18:53.348 nvme9n1: ios=4214/0, merge=0/0, ticks=1229442/0, in_queue=1229442, util=99.05% 00:18:53.348 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:53.348 [global] 00:18:53.348 thread=1 00:18:53.348 invalidate=1 00:18:53.348 rw=randwrite 00:18:53.348 time_based=1 00:18:53.348 runtime=10 00:18:53.348 ioengine=libaio 00:18:53.348 direct=1 00:18:53.348 bs=262144 00:18:53.348 iodepth=64 00:18:53.348 norandommap=1 00:18:53.348 numjobs=1 00:18:53.348 00:18:53.348 [job0] 00:18:53.348 filename=/dev/nvme0n1 00:18:53.348 [job1] 00:18:53.348 filename=/dev/nvme10n1 00:18:53.348 [job2] 00:18:53.348 filename=/dev/nvme1n1 00:18:53.348 [job3] 00:18:53.348 filename=/dev/nvme2n1 00:18:53.348 [job4] 00:18:53.348 filename=/dev/nvme3n1 00:18:53.348 [job5] 00:18:53.348 filename=/dev/nvme4n1 00:18:53.348 [job6] 00:18:53.348 
filename=/dev/nvme5n1 00:18:53.348 [job7] 00:18:53.348 filename=/dev/nvme6n1 00:18:53.348 [job8] 00:18:53.348 filename=/dev/nvme7n1 00:18:53.348 [job9] 00:18:53.348 filename=/dev/nvme8n1 00:18:53.348 [job10] 00:18:53.348 filename=/dev/nvme9n1 00:18:53.348 Could not set queue depth (nvme0n1) 00:18:53.348 Could not set queue depth (nvme10n1) 00:18:53.348 Could not set queue depth (nvme1n1) 00:18:53.348 Could not set queue depth (nvme2n1) 00:18:53.348 Could not set queue depth (nvme3n1) 00:18:53.348 Could not set queue depth (nvme4n1) 00:18:53.348 Could not set queue depth (nvme5n1) 00:18:53.348 Could not set queue depth (nvme6n1) 00:18:53.348 Could not set queue depth (nvme7n1) 00:18:53.348 Could not set queue depth (nvme8n1) 00:18:53.349 Could not set queue depth (nvme9n1) 00:18:53.349 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.349 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.349 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.349 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.349 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.349 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.349 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.349 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.349 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.349 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.349 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:53.349 fio-3.35 00:18:53.349 Starting 11 threads 00:19:03.331 00:19:03.331 job0: (groupid=0, jobs=1): err= 0: pid=77569: Tue Nov 19 00:03:08 2024 00:19:03.331 write: IOPS=162, BW=40.6MiB/s (42.6MB/s)(415MiB/10225msec); 0 zone resets 00:19:03.331 slat (usec): min=21, max=235018, avg=5922.98, stdev=13023.65 00:19:03.331 clat (msec): min=31, max=726, avg=387.86, stdev=92.47 00:19:03.331 lat (msec): min=31, max=726, avg=393.78, stdev=93.26 00:19:03.331 clat percentiles (msec): 00:19:03.331 | 1.00th=[ 77], 5.00th=[ 268], 10.00th=[ 292], 20.00th=[ 309], 00:19:03.331 | 30.00th=[ 338], 40.00th=[ 388], 50.00th=[ 409], 60.00th=[ 422], 00:19:03.331 | 70.00th=[ 430], 80.00th=[ 443], 90.00th=[ 493], 95.00th=[ 518], 00:19:03.331 | 99.00th=[ 609], 99.50th=[ 609], 99.90th=[ 726], 99.95th=[ 726], 00:19:03.331 | 99.99th=[ 726] 00:19:03.331 bw ( KiB/s): min=22528, max=57344, per=4.53%, avg=40883.20, stdev=8259.55, samples=20 00:19:03.331 iops : min= 88, max= 224, avg=159.70, stdev=32.26, samples=20 00:19:03.331 lat (msec) : 50=0.72%, 100=0.72%, 250=2.95%, 500=88.44%, 750=7.16% 00:19:03.331 cpu : usr=0.27%, sys=0.62%, ctx=1653, majf=0, minf=1 00:19:03.331 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:19:03.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.331 complete : 0=0.0%, 4=99.9%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.332 issued rwts: total=0,1661,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.332 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.332 job1: (groupid=0, jobs=1): err= 0: pid=77570: Tue Nov 19 00:03:08 2024 00:19:03.332 write: IOPS=768, BW=192MiB/s (202MB/s)(1947MiB/10127msec); 0 zone resets 00:19:03.332 slat (usec): min=13, max=34986, avg=1278.82, stdev=2390.75 00:19:03.332 clat (msec): min=20, max=287, avg=81.92, stdev=32.01 00:19:03.332 lat (msec): min=20, max=287, avg=83.20, stdev=32.40 00:19:03.332 clat percentiles (msec): 00:19:03.332 | 1.00th=[ 65], 5.00th=[ 66], 10.00th=[ 66], 20.00th=[ 67], 00:19:03.332 | 30.00th=[ 69], 40.00th=[ 70], 50.00th=[ 70], 60.00th=[ 71], 00:19:03.332 | 70.00th=[ 71], 80.00th=[ 72], 90.00th=[ 153], 95.00th=[ 161], 00:19:03.332 | 99.00th=[ 167], 99.50th=[ 180], 99.90th=[ 268], 99.95th=[ 279], 00:19:03.332 | 99.99th=[ 288] 00:19:03.332 bw ( KiB/s): min=98816, max=237568, per=21.91%, avg=197710.80, stdev=57432.00, samples=20 00:19:03.332 iops : min= 386, max= 928, avg=772.30, stdev=224.34, samples=20 00:19:03.332 lat (msec) : 50=0.21%, 100=85.21%, 250=14.41%, 500=0.18% 00:19:03.332 cpu : usr=1.27%, sys=2.23%, ctx=9058, majf=0, minf=1 00:19:03.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:03.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.332 issued rwts: total=0,7787,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.332 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.332 job2: (groupid=0, jobs=1): err= 0: pid=77582: Tue Nov 19 00:03:08 2024 00:19:03.332 write: IOPS=168, BW=42.2MiB/s (44.3MB/s)(431MiB/10212msec); 0 zone resets 00:19:03.332 slat (usec): min=19, max=202248, avg=5567.65, stdev=11638.53 00:19:03.332 clat (msec): min=4, max=522, avg=373.04, stdev=71.32 00:19:03.332 lat (msec): min=4, max=522, avg=378.61, stdev=72.01 00:19:03.332 clat percentiles (msec): 00:19:03.332 | 1.00th=[ 142], 5.00th=[ 239], 10.00th=[ 284], 20.00th=[ 309], 00:19:03.332 | 30.00th=[ 334], 40.00th=[ 384], 50.00th=[ 401], 60.00th=[ 409], 00:19:03.332 | 70.00th=[ 422], 80.00th=[ 430], 90.00th=[ 439], 95.00th=[ 447], 00:19:03.332 | 99.00th=[ 481], 99.50th=[ 506], 99.90th=[ 523], 99.95th=[ 523], 00:19:03.332 | 99.99th=[ 523] 00:19:03.332 bw ( KiB/s): min=29184, max=61440, per=4.72%, avg=42543.10, stdev=7839.86, samples=20 00:19:03.332 iops : min= 114, max= 240, avg=166.15, stdev=30.63, samples=20 00:19:03.332 lat (msec) : 10=0.06%, 50=0.23%, 250=5.51%, 500=93.51%, 750=0.70% 00:19:03.332 cpu : usr=0.41%, sys=0.50%, ctx=1674, majf=0, minf=1 00:19:03.332 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:19:03.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.332 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.332 issued rwts: total=0,1725,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.332 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.332 job3: (groupid=0, jobs=1): err= 0: pid=77583: Tue Nov 19 00:03:08 2024 00:19:03.332 write: IOPS=183, BW=46.0MiB/s (48.2MB/s)(470MiB/10228msec); 0 zone resets 00:19:03.332 slat (usec): min=18, max=57011, avg=5253.86, stdev=9849.49 00:19:03.332 clat (msec): min=20, max=514, avg=342.75, stdev=82.64 00:19:03.332 lat (msec): min=20, max=514, avg=348.00, stdev=83.49 00:19:03.332 clat percentiles (msec): 
00:19:03.332 | 1.00th=[ 52], 5.00th=[ 124], 10.00th=[ 241], 20.00th=[ 296], 00:19:03.332 | 30.00th=[ 321], 40.00th=[ 359], 50.00th=[ 376], 60.00th=[ 384], 00:19:03.332 | 70.00th=[ 393], 80.00th=[ 397], 90.00th=[ 409], 95.00th=[ 422], 00:19:03.332 | 99.00th=[ 435], 99.50th=[ 481], 99.90th=[ 514], 99.95th=[ 514], 00:19:03.332 | 99.99th=[ 514] 00:19:03.332 bw ( KiB/s): min=36864, max=88576, per=5.15%, avg=46485.30, stdev=11207.21, samples=20 00:19:03.332 iops : min= 144, max= 346, avg=181.55, stdev=43.79, samples=20 00:19:03.332 lat (msec) : 50=0.85%, 100=2.07%, 250=7.18%, 500=89.79%, 750=0.11% 00:19:03.332 cpu : usr=0.32%, sys=0.51%, ctx=2234, majf=0, minf=1 00:19:03.332 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:19:03.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.332 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.332 issued rwts: total=0,1880,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.332 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.332 job4: (groupid=0, jobs=1): err= 0: pid=77584: Tue Nov 19 00:03:08 2024 00:19:03.332 write: IOPS=215, BW=53.8MiB/s (56.4MB/s)(549MiB/10208msec); 0 zone resets 00:19:03.332 slat (usec): min=14, max=28754, avg=4550.80, stdev=8005.12 00:19:03.332 clat (msec): min=30, max=500, avg=292.69, stdev=37.16 00:19:03.332 lat (msec): min=30, max=500, avg=297.24, stdev=36.91 00:19:03.332 clat percentiles (msec): 00:19:03.332 | 1.00th=[ 107], 5.00th=[ 271], 10.00th=[ 275], 20.00th=[ 279], 00:19:03.332 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 296], 60.00th=[ 296], 00:19:03.332 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 317], 95.00th=[ 330], 00:19:03.332 | 99.00th=[ 418], 99.50th=[ 451], 99.90th=[ 485], 99.95th=[ 502], 00:19:03.332 | 99.99th=[ 502] 00:19:03.332 bw ( KiB/s): min=47616, max=57344, per=6.05%, avg=54624.65, stdev=2389.59, samples=20 00:19:03.332 iops : min= 186, max= 224, avg=213.35, stdev= 9.30, samples=20 00:19:03.332 lat (msec) : 50=0.36%, 100=0.55%, 250=2.50%, 500=96.50%, 750=0.09% 00:19:03.332 cpu : usr=0.39%, sys=0.57%, ctx=2938, majf=0, minf=1 00:19:03.332 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:19:03.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.332 issued rwts: total=0,2197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.332 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.332 job5: (groupid=0, jobs=1): err= 0: pid=77585: Tue Nov 19 00:03:08 2024 00:19:03.332 write: IOPS=174, BW=43.7MiB/s (45.9MB/s)(447MiB/10220msec); 0 zone resets 00:19:03.332 slat (usec): min=16, max=65931, avg=5589.66, stdev=10174.15 00:19:03.332 clat (msec): min=69, max=514, avg=360.05, stdev=57.32 00:19:03.332 lat (msec): min=69, max=514, avg=365.64, stdev=57.39 00:19:03.332 clat percentiles (msec): 00:19:03.332 | 1.00th=[ 120], 5.00th=[ 275], 10.00th=[ 292], 20.00th=[ 313], 00:19:03.332 | 30.00th=[ 351], 40.00th=[ 368], 50.00th=[ 380], 60.00th=[ 384], 00:19:03.332 | 70.00th=[ 393], 80.00th=[ 401], 90.00th=[ 414], 95.00th=[ 422], 00:19:03.332 | 99.00th=[ 447], 99.50th=[ 481], 99.90th=[ 514], 99.95th=[ 514], 00:19:03.332 | 99.99th=[ 514] 00:19:03.332 bw ( KiB/s): min=38912, max=55296, per=4.89%, avg=44164.30, stdev=5086.23, samples=20 00:19:03.332 iops : min= 152, max= 216, avg=172.50, stdev=19.87, samples=20 00:19:03.332 lat (msec) : 100=0.67%, 250=2.40%, 500=96.81%, 750=0.11% 
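Each job block in these summaries follows the same fio layout: submission, completion, and total latency lines (slat/clat/lat), a bw line whose per= column is that job's share of the group's aggregate bandwidth, the clat percentile table, and the IO-depth distribution. The harness only logs this human-readable form; as an aside (not something this test does), fio can also emit the same numbers as JSON, which is the cleaner source if they are needed programmatically:

    # Field names per fio's JSON schema; bw is reported in KiB/s.
    fio --output-format=json multiconnection.fio |
        jq -r '.jobs[] | [.jobname, .read.bw, .write.bw, .read.iops, .write.iops] | @tsv'

The read and write sub-objects in that JSON carry the same percentile data printed in the text blocks above and below.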
00:19:03.332 cpu : usr=0.30%, sys=0.58%, ctx=1910, majf=0, minf=1 00:19:03.332 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:19:03.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.332 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.332 issued rwts: total=0,1788,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.332 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.332 job6: (groupid=0, jobs=1): err= 0: pid=77586: Tue Nov 19 00:03:08 2024 00:19:03.332 write: IOPS=212, BW=53.2MiB/s (55.8MB/s)(543MiB/10205msec); 0 zone resets 00:19:03.332 slat (usec): min=19, max=88160, avg=4597.36, stdev=8240.33 00:19:03.332 clat (msec): min=91, max=505, avg=295.94, stdev=31.19 00:19:03.332 lat (msec): min=91, max=505, avg=300.54, stdev=30.60 00:19:03.332 clat percentiles (msec): 00:19:03.332 | 1.00th=[ 176], 5.00th=[ 271], 10.00th=[ 275], 20.00th=[ 284], 00:19:03.332 | 30.00th=[ 292], 40.00th=[ 292], 50.00th=[ 296], 60.00th=[ 296], 00:19:03.332 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 321], 95.00th=[ 330], 00:19:03.332 | 99.00th=[ 422], 99.50th=[ 456], 99.90th=[ 489], 99.95th=[ 506], 00:19:03.332 | 99.99th=[ 506] 00:19:03.332 bw ( KiB/s): min=47104, max=57344, per=5.98%, avg=53984.85, stdev=2734.31, samples=20 00:19:03.332 iops : min= 184, max= 224, avg=210.85, stdev=10.67, samples=20 00:19:03.332 lat (msec) : 100=0.18%, 250=2.39%, 500=97.33%, 750=0.09% 00:19:03.332 cpu : usr=0.45%, sys=0.62%, ctx=2279, majf=0, minf=1 00:19:03.333 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:19:03.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.333 issued rwts: total=0,2172,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.333 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.333 job7: (groupid=0, jobs=1): err= 0: pid=77587: Tue Nov 19 00:03:08 2024 00:19:03.333 write: IOPS=270, BW=67.6MiB/s (70.9MB/s)(684MiB/10126msec); 0 zone resets 00:19:03.333 slat (usec): min=20, max=31977, avg=3529.31, stdev=6739.72 00:19:03.333 clat (msec): min=7, max=360, avg=233.16, stdev=76.49 00:19:03.333 lat (msec): min=9, max=361, avg=236.68, stdev=77.57 00:19:03.333 clat percentiles (msec): 00:19:03.333 | 1.00th=[ 27], 5.00th=[ 92], 10.00th=[ 148], 20.00th=[ 157], 00:19:03.333 | 30.00th=[ 163], 40.00th=[ 259], 50.00th=[ 279], 60.00th=[ 288], 00:19:03.333 | 70.00th=[ 292], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 305], 00:19:03.333 | 99.00th=[ 313], 99.50th=[ 334], 99.90th=[ 355], 99.95th=[ 355], 00:19:03.333 | 99.99th=[ 359] 00:19:03.333 bw ( KiB/s): min=53248, max=129536, per=7.59%, avg=68448.85, stdev=23453.79, samples=20 00:19:03.333 iops : min= 208, max= 506, avg=267.35, stdev=91.63, samples=20 00:19:03.333 lat (msec) : 10=0.07%, 20=0.51%, 50=2.12%, 100=2.78%, 250=34.02% 00:19:03.333 lat (msec) : 500=60.50% 00:19:03.333 cpu : usr=0.61%, sys=0.82%, ctx=2624, majf=0, minf=1 00:19:03.333 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:19:03.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.333 issued rwts: total=0,2737,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.333 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.333 job8: (groupid=0, jobs=1): err= 0: pid=77588: Tue Nov 19 00:03:08 2024 00:19:03.333 
write: IOPS=170, BW=42.7MiB/s (44.7MB/s)(436MiB/10220msec); 0 zone resets 00:19:03.333 slat (usec): min=18, max=135447, avg=5734.67, stdev=10883.40 00:19:03.333 clat (msec): min=137, max=511, avg=369.12, stdev=54.06 00:19:03.333 lat (msec): min=137, max=511, avg=374.86, stdev=53.88 00:19:03.333 clat percentiles (msec): 00:19:03.333 | 1.00th=[ 203], 5.00th=[ 284], 10.00th=[ 292], 20.00th=[ 313], 00:19:03.333 | 30.00th=[ 355], 40.00th=[ 376], 50.00th=[ 384], 60.00th=[ 393], 00:19:03.333 | 70.00th=[ 401], 80.00th=[ 414], 90.00th=[ 430], 95.00th=[ 435], 00:19:03.333 | 99.00th=[ 451], 99.50th=[ 477], 99.90th=[ 510], 99.95th=[ 510], 00:19:03.333 | 99.99th=[ 510] 00:19:03.333 bw ( KiB/s): min=36864, max=55296, per=4.77%, avg=43007.15, stdev=5671.33, samples=20 00:19:03.333 iops : min= 144, max= 216, avg=167.95, stdev=22.16, samples=20 00:19:03.333 lat (msec) : 250=2.12%, 500=97.76%, 750=0.11% 00:19:03.333 cpu : usr=0.41%, sys=0.43%, ctx=1912, majf=0, minf=1 00:19:03.333 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:19:03.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.333 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.333 issued rwts: total=0,1744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.333 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.333 job9: (groupid=0, jobs=1): err= 0: pid=77589: Tue Nov 19 00:03:08 2024 00:19:03.333 write: IOPS=1014, BW=254MiB/s (266MB/s)(2552MiB/10063msec); 0 zone resets 00:19:03.333 slat (usec): min=13, max=7618, avg=970.41, stdev=1629.73 00:19:03.333 clat (msec): min=9, max=126, avg=62.10, stdev= 4.56 00:19:03.333 lat (msec): min=9, max=126, avg=63.07, stdev= 4.35 00:19:03.333 clat percentiles (msec): 00:19:03.333 | 1.00th=[ 57], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 60], 00:19:03.333 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 63], 00:19:03.333 | 70.00th=[ 64], 80.00th=[ 64], 90.00th=[ 65], 95.00th=[ 66], 00:19:03.333 | 99.00th=[ 78], 99.50th=[ 87], 99.90th=[ 115], 99.95th=[ 124], 00:19:03.333 | 99.99th=[ 128] 00:19:03.333 bw ( KiB/s): min=230400, max=271872, per=28.79%, avg=259737.95, stdev=8511.00, samples=20 00:19:03.333 iops : min= 900, max= 1062, avg=1014.60, stdev=33.25, samples=20 00:19:03.333 lat (msec) : 10=0.04%, 20=0.04%, 50=0.20%, 100=99.47%, 250=0.25% 00:19:03.333 cpu : usr=1.34%, sys=2.76%, ctx=12736, majf=0, minf=1 00:19:03.333 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:03.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.333 issued rwts: total=0,10208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.333 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.333 job10: (groupid=0, jobs=1): err= 0: pid=77590: Tue Nov 19 00:03:08 2024 00:19:03.333 write: IOPS=210, BW=52.6MiB/s (55.2MB/s)(537MiB/10207msec); 0 zone resets 00:19:03.333 slat (usec): min=19, max=164654, avg=4648.52, stdev=8790.61 00:19:03.333 clat (msec): min=167, max=504, avg=299.17, stdev=27.46 00:19:03.333 lat (msec): min=167, max=504, avg=303.82, stdev=26.52 00:19:03.333 clat percentiles (msec): 00:19:03.333 | 1.00th=[ 224], 5.00th=[ 275], 10.00th=[ 279], 20.00th=[ 288], 00:19:03.333 | 30.00th=[ 292], 40.00th=[ 296], 50.00th=[ 296], 60.00th=[ 300], 00:19:03.333 | 70.00th=[ 300], 80.00th=[ 309], 90.00th=[ 321], 95.00th=[ 338], 00:19:03.333 | 99.00th=[ 422], 99.50th=[ 456], 99.90th=[ 489], 
99.95th=[ 506], 00:19:03.333 | 99.99th=[ 506] 00:19:03.333 bw ( KiB/s): min=38989, max=57344, per=5.92%, avg=53405.45, stdev=4041.25, samples=20 00:19:03.333 iops : min= 152, max= 224, avg=208.60, stdev=15.84, samples=20 00:19:03.333 lat (msec) : 250=1.81%, 500=98.09%, 750=0.09% 00:19:03.333 cpu : usr=0.37%, sys=0.69%, ctx=2292, majf=0, minf=1 00:19:03.333 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:19:03.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:03.333 issued rwts: total=0,2149,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.333 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:03.333 00:19:03.333 Run status group 0 (all jobs): 00:19:03.333 WRITE: bw=881MiB/s (924MB/s), 40.6MiB/s-254MiB/s (42.6MB/s-266MB/s), io=9012MiB (9450MB), run=10063-10228msec 00:19:03.333 00:19:03.333 Disk stats (read/write): 00:19:03.333 nvme0n1: ios=49/3192, merge=0/0, ticks=59/1202878, in_queue=1202937, util=97.93% 00:19:03.333 nvme10n1: ios=49/15443, merge=0/0, ticks=69/1211194, in_queue=1211263, util=98.17% 00:19:03.333 nvme1n1: ios=46/3326, merge=0/0, ticks=61/1205257, in_queue=1205318, util=98.22% 00:19:03.333 nvme2n1: ios=30/3632, merge=0/0, ticks=22/1205452, in_queue=1205474, util=98.18% 00:19:03.333 nvme3n1: ios=28/4266, merge=0/0, ticks=114/1204170, in_queue=1204284, util=98.42% 00:19:03.333 nvme4n1: ios=0/3448, merge=0/0, ticks=0/1203839, in_queue=1203839, util=98.25% 00:19:03.333 nvme5n1: ios=0/4218, merge=0/0, ticks=0/1203812, in_queue=1203812, util=98.36% 00:19:03.333 nvme6n1: ios=0/5348, merge=0/0, ticks=0/1214899, in_queue=1214899, util=98.52% 00:19:03.333 nvme7n1: ios=0/3357, merge=0/0, ticks=0/1203466, in_queue=1203466, util=98.69% 00:19:03.333 nvme8n1: ios=0/20259, merge=0/0, ticks=0/1215456, in_queue=1215456, util=98.79% 00:19:03.333 nvme9n1: ios=0/4171, merge=0/0, ticks=0/1204329, in_queue=1204329, util=98.88% 00:19:03.333 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:19:03.333 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:19:03.333 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.333 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:03.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:03.333 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:03.333 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:03.333 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:03.333 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:19:03.333 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:03.333 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:19:03.333 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:03.333 00:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:03.333 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:03.334 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:03.334 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:03.334 00:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:03.334 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:03.334 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:03.334 00:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:03.334 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:03.334 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:19:03.334 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:03.334 00:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:03.335 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:03.335 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:03.335 00:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:03.335 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.335 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:03.595 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:03.595 
00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:03.595 rmmod nvme_tcp 00:19:03.595 rmmod nvme_fabrics 00:19:03.595 rmmod nvme_keyring 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 76898 ']' 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 76898 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 76898 ']' 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 76898 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76898 00:19:03.595 killing process with pid 76898 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76898' 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 76898 00:19:03.595 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 76898 
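The trace above is the tail of the per-subsystem teardown loop (multiconnection.sh@37-40): each pass disconnects the initiator from one cnode, waits for that subsystem's serial to disappear from lsblk, then deletes the subsystem over RPC. A minimal sketch reconstructed from the xtrace — the poll interval and retry cap in the helper are assumptions, since the trace only shows the lsblk/grep probes (@1223-@1235), not the loop bounds:

    # Poll until no block device reports the given serial any more.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            ((++i > 15)) && return 1   # assumed cap; not visible in this trace
            sleep 1                    # assumed delay
        done
        return 0
    }

    # One teardown unit per subsystem (multiconnection.sh@37-40).
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        waitforserial_disconnect "SPDK${i}"
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done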
00:19:06.887 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:06.887 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:06.887 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:06.887 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:19:06.887 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:19:06.887 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:06.887 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:19:06.887 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:06.887 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:06.887 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:06.887 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:06.887 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:06.887 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:06.887 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:06.887 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:06.887 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:06.887 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:06.887 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:06.887 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:06.887 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:06.887 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:06.887 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:06.887 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:06.887 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.887 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:06.887 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.887 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:19:06.887 ************************************ 00:19:06.887 END TEST nvmf_multiconnection 00:19:06.887 ************************************ 
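nvmftestfini, as traced above, unloads the nvme-tcp/nvme-fabrics modules, kills the target process (pid 76898), and then nvmf_tcp_fini strips the test's firewall rules before dismantling the virtual network. The iptr pipeline appears verbatim at nvmf/common.sh@791; the final namespace removal is an assumption here, since _remove_spdk_ns runs with its output redirected away in the trace:

    # nvmf/common.sh@791: drop only the iptables rules tagged with the
    # SPDK_NVMF comment marker, leaving unrelated firewall state intact.
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    # Teardown order from @233-@245, condensed to one veth pair per side:
    ip link set nvmf_init_br nomaster
    ip link set nvmf_tgt_br nomaster
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk   # assumed to happen inside _remove_spdk_ns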
00:19:06.887 00:19:06.887 real 0m52.582s 00:19:06.887 user 2m58.826s 00:19:06.887 sys 0m27.395s 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:06.888 ************************************ 00:19:06.888 START TEST nvmf_initiator_timeout 00:19:06.888 ************************************ 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:06.888 * Looking for test storage... 00:19:06.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:06.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.888 --rc genhtml_branch_coverage=1 00:19:06.888 --rc genhtml_function_coverage=1 00:19:06.888 --rc genhtml_legend=1 00:19:06.888 --rc geninfo_all_blocks=1 00:19:06.888 --rc geninfo_unexecuted_blocks=1 00:19:06.888 00:19:06.888 ' 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:06.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.888 --rc genhtml_branch_coverage=1 00:19:06.888 --rc genhtml_function_coverage=1 00:19:06.888 --rc genhtml_legend=1 00:19:06.888 --rc geninfo_all_blocks=1 00:19:06.888 --rc geninfo_unexecuted_blocks=1 00:19:06.888 00:19:06.888 ' 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:06.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.888 --rc genhtml_branch_coverage=1 00:19:06.888 --rc genhtml_function_coverage=1 00:19:06.888 --rc genhtml_legend=1 00:19:06.888 --rc geninfo_all_blocks=1 00:19:06.888 --rc geninfo_unexecuted_blocks=1 00:19:06.888 00:19:06.888 ' 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:06.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.888 --rc genhtml_branch_coverage=1 00:19:06.888 --rc genhtml_function_coverage=1 00:19:06.888 --rc genhtml_legend=1 00:19:06.888 --rc geninfo_all_blocks=1 00:19:06.888 --rc geninfo_unexecuted_blocks=1 00:19:06.888 00:19:06.888 ' 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.888 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.889 00:03:13 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:06.889 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
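The variables just set (nvmf/common.sh@145 onward) define a fixed address plan for nvmf_veth_init: two initiator veths on the host (10.0.0.1, 10.0.0.2) and two target veths inside the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), with all peer ends joined to one bridge. A sketch condensed to a single pair per side — the trace that follows (@177-@214) builds the full four-pair version:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

The "Cannot find device" lines above are expected on a clean host: the fini helpers run first with `set +e` semantics (each guarded by `true`), so stale interfaces from a previous run are removed if present and ignored if not.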
00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:06.889 Cannot find device "nvmf_init_br" 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:06.889 Cannot find device "nvmf_init_br2" 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:06.889 Cannot find device "nvmf_tgt_br" 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:06.889 Cannot find device "nvmf_tgt_br2" 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:06.889 Cannot find device "nvmf_init_br" 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:06.889 Cannot find device "nvmf_init_br2" 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:06.889 Cannot find device "nvmf_tgt_br" 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:06.889 Cannot find device "nvmf_tgt_br2" 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:19:06.889 00:03:13 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:06.889 Cannot find device "nvmf_br" 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:06.889 Cannot find device "nvmf_init_if" 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:06.889 Cannot find device "nvmf_init_if2" 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:06.889 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:06.889 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:06.889 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:07.148 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:07.148 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:07.148 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:07.148 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:07.148 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:07.148 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:07.148 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:07.148 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:07.148 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:07.148 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:19:07.148 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:07.148 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:07.148 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:07.148 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:07.149 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:07.149 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:19:07.149 00:19:07.149 --- 10.0.0.3 ping statistics --- 00:19:07.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.149 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:07.149 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:07.149 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:19:07.149 00:19:07.149 --- 10.0.0.4 ping statistics --- 00:19:07.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.149 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:07.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:07.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:07.149 00:19:07.149 --- 10.0.0.1 ping statistics --- 00:19:07.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.149 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:07.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:19:07.149 00:19:07.149 --- 10.0.0.2 ping statistics --- 00:19:07.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.149 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=78035 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 78035 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 78035 ']' 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:07.149 00:03:13 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.149 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:07.408 [2024-11-19 00:03:13.925046] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:19:07.408 [2024-11-19 00:03:13.925212] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.667 [2024-11-19 00:03:14.116658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:07.667 [2024-11-19 00:03:14.245971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.667 [2024-11-19 00:03:14.246273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.667 [2024-11-19 00:03:14.246314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.667 [2024-11-19 00:03:14.246332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.667 [2024-11-19 00:03:14.246348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
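The EAL and app_setup_trace notices above come from the target launched at nvmf/common.sh@508: nvmf_tgt runs inside the test namespace and the harness blocks until the RPC socket answers. A sketch of that launch pattern; waitforlisten's polling behaviour is summarized from its "Waiting for process to start up and listen on UNIX domain socket" message rather than quoted from the helper:

    # Start the SPDK NVMe-oF target inside the namespace: shm id 0 (-i 0),
    # all tracepoint groups (-e 0xFFFF), cores 0-3 (-m 0xF).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # polls until /var/tmp/spdk.sock accepts RPCs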
00:19:07.667 [2024-11-19 00:03:14.248562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.667 [2024-11-19 00:03:14.248714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.667 [2024-11-19 00:03:14.248886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.667 [2024-11-19 00:03:14.249459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:07.926 [2024-11-19 00:03:14.459967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:08.493 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.493 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:08.493 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:08.493 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:08.493 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:08.493 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.493 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:08.493 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:08.493 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.493 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:08.493 Malloc0 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:08.493 Delay0 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:08.493 [2024-11-19 00:03:15.047650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:08.493 00:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:08.493 [2024-11-19 00:03:15.079918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.493 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:19:08.753 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:08.753 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:19:08.753 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:08.753 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:08.753 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:19:10.655 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:10.655 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:10.655 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:10.655 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:10.655 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:10.655 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:19:10.655 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=78098 00:19:10.655 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:10.655 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:10.655 [global] 00:19:10.655 thread=1 00:19:10.655 invalidate=1 00:19:10.655 rw=write 00:19:10.655 time_based=1 00:19:10.655 runtime=60 00:19:10.655 ioengine=libaio 00:19:10.655 direct=1 00:19:10.655 bs=4096 00:19:10.655 iodepth=1 00:19:10.655 norandommap=0 00:19:10.655 numjobs=1 00:19:10.655 00:19:10.655 verify_dump=1 00:19:10.655 verify_backlog=512 00:19:10.655 verify_state_save=0 00:19:10.655 do_verify=1 00:19:10.655 verify=crc32c-intel 00:19:10.655 [job0] 00:19:10.655 filename=/dev/nvme0n1 00:19:10.655 Could not set queue depth (nvme0n1) 00:19:10.913 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:10.913 fio-3.35 00:19:10.913 Starting 1 thread 00:19:14.198 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:14.198 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.198 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:14.198 true 00:19:14.198 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.198 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:14.198 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.198 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:14.198 true 00:19:14.198 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.198 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:14.198 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.198 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:14.198 true 00:19:14.198 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.198 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:14.199 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.199 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:14.199 true 00:19:14.199 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.199 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:19:16.731 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:16.731 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.731 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:16.731 true 00:19:16.731 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.731 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:16.731 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.731 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:16.731 true 00:19:16.731 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.731 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:16.731 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.731 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:16.731 true 00:19:16.731 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.731 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:16.731 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.731 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:16.731 true 00:19:16.731 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.731 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:16.731 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 78098 00:20:12.959 00:20:12.959 job0: (groupid=0, jobs=1): err= 0: pid=78120: Tue Nov 19 00:04:17 2024 00:20:12.959 read: IOPS=672, BW=2688KiB/s (2753kB/s)(158MiB/60000msec) 00:20:12.959 slat (usec): min=11, max=16707, avg=15.67, stdev=112.47 00:20:12.959 clat (usec): min=184, max=40833k, avg=1254.54, stdev=203344.12 00:20:12.959 lat (usec): min=197, max=40833k, avg=1270.21, stdev=203344.15 00:20:12.959 clat percentiles (usec): 00:20:12.959 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 219], 00:20:12.959 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 243], 00:20:12.959 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 293], 00:20:12.959 | 99.00th=[ 326], 99.50th=[ 355], 99.90th=[ 627], 99.95th=[ 816], 00:20:12.959 | 99.99th=[ 1483] 00:20:12.959 write: IOPS=674, BW=2697KiB/s (2761kB/s)(158MiB/60000msec); 0 zone resets 00:20:12.959 slat (usec): min=14, max=880, avg=21.72, stdev= 8.16 00:20:12.959 clat (usec): min=142, max=1247, avg=191.89, stdev=29.75 00:20:12.959 lat (usec): min=161, max=1270, avg=213.61, stdev=31.31 00:20:12.959 clat percentiles (usec): 00:20:12.959 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 174], 00:20:12.959 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 192], 00:20:12.959 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 221], 95.00th=[ 235], 00:20:12.959 | 99.00th=[ 
265], 99.50th=[ 285], 99.90th=[ 529], 99.95th=[ 676], 00:20:12.959 | 99.99th=[ 922] 00:20:12.959 bw ( KiB/s): min= 4096, max= 9496, per=100.00%, avg=8320.61, stdev=804.63, samples=38 00:20:12.959 iops : min= 1024, max= 2374, avg=2080.13, stdev=201.16, samples=38 00:20:12.959 lat (usec) : 250=83.68%, 500=16.15%, 750=0.13%, 1000=0.03% 00:20:12.959 lat (msec) : 2=0.01%, 10=0.01%, >=2000=0.01% 00:20:12.959 cpu : usr=0.57%, sys=1.91%, ctx=80778, majf=0, minf=5 00:20:12.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:12.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.959 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.959 issued rwts: total=40324,40448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:12.959 00:20:12.959 Run status group 0 (all jobs): 00:20:12.959 READ: bw=2688KiB/s (2753kB/s), 2688KiB/s-2688KiB/s (2753kB/s-2753kB/s), io=158MiB (165MB), run=60000-60000msec 00:20:12.959 WRITE: bw=2697KiB/s (2761kB/s), 2697KiB/s-2697KiB/s (2761kB/s-2761kB/s), io=158MiB (166MB), run=60000-60000msec 00:20:12.959 00:20:12.959 Disk stats (read/write): 00:20:12.959 nvme0n1: ios=40135/40423, merge=0/0, ticks=10207/8390, in_queue=18597, util=99.59% 00:20:12.959 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:12.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:12.959 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:12.959 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:20:12.959 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:20:12.959 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:12.959 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:12.959 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:12.959 nvmf hotplug test: fio successful as expected 00:20:12.959 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:20:12.959 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:12.959 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:12.959 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:12.959 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.959 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:12.959 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:12.960 00:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:12.960 rmmod nvme_tcp 00:20:12.960 rmmod nvme_fabrics 00:20:12.960 rmmod nvme_keyring 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 78035 ']' 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 78035 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 78035 ']' 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 78035 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78035 00:20:12.960 killing process with pid 78035 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78035' 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 78035 00:20:12.960 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 78035 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:20:12.960 00:04:18 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.960 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:20:12.960 00:20:12.960 real 1m5.811s 00:20:12.960 user 3m55.472s 00:20:12.960 sys 0m22.008s 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:12.960 ************************************ 00:20:12.960 END TEST nvmf_initiator_timeout 00:20:12.960 ************************************ 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test 
nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:12.960 ************************************ 00:20:12.960 START TEST nvmf_nsid 00:20:12.960 ************************************ 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:12.960 * Looking for test storage... 00:20:12.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:12.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.960 --rc genhtml_branch_coverage=1 00:20:12.960 --rc genhtml_function_coverage=1 00:20:12.960 --rc genhtml_legend=1 00:20:12.960 --rc geninfo_all_blocks=1 00:20:12.960 --rc geninfo_unexecuted_blocks=1 00:20:12.960 00:20:12.960 ' 00:20:12.960 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:12.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.960 --rc genhtml_branch_coverage=1 00:20:12.960 --rc genhtml_function_coverage=1 00:20:12.960 --rc genhtml_legend=1 00:20:12.960 --rc geninfo_all_blocks=1 00:20:12.960 --rc geninfo_unexecuted_blocks=1 00:20:12.960 00:20:12.960 ' 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:12.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.961 --rc genhtml_branch_coverage=1 00:20:12.961 --rc genhtml_function_coverage=1 00:20:12.961 --rc genhtml_legend=1 00:20:12.961 --rc geninfo_all_blocks=1 00:20:12.961 --rc geninfo_unexecuted_blocks=1 00:20:12.961 00:20:12.961 ' 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:12.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.961 --rc genhtml_branch_coverage=1 00:20:12.961 --rc genhtml_function_coverage=1 00:20:12.961 --rc genhtml_legend=1 00:20:12.961 --rc geninfo_all_blocks=1 00:20:12.961 --rc geninfo_unexecuted_blocks=1 00:20:12.961 00:20:12.961 ' 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
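The trace above steps through the lcov version gate: lt 1.15 2 splits both dotted version strings on "." and compares them field by field, and since lcov 1.15 predates 2.x the fallback branch/function coverage flags get exported. A minimal sketch of the same field-by-field comparison, assuming purely numeric fields (the real cmp_versions in scripts/common.sh also handles ">", "<" and "=" operators and is not reproduced verbatim here):

  # Hypothetical simplified helper: succeed if dotted version $1 < $2.
  version_lt() {
      local -a v1 v2
      IFS=. read -ra v1 <<< "$1"
      IFS=. read -ra v2 <<< "$2"
      local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for ((i = 0; i < max; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "old lcov detected: export fallback LCOV_OPTS"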
00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:12.961 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:12.961 Cannot find device "nvmf_init_br" 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:12.961 Cannot find device "nvmf_init_br2" 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:12.961 Cannot find device "nvmf_tgt_br" 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:12.961 Cannot find device "nvmf_tgt_br2" 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:12.961 Cannot find device "nvmf_init_br" 00:20:12.961 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:12.962 Cannot find device "nvmf_init_br2" 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:12.962 Cannot find device "nvmf_tgt_br" 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:12.962 Cannot find device "nvmf_tgt_br2" 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:12.962 Cannot find device "nvmf_br" 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:12.962 Cannot find device "nvmf_init_if" 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:12.962 Cannot find device "nvmf_init_if2" 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:12.962 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:20:12.962 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:12.962 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:13.221 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:13.221 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:13.221 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
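Taken together, the nvmf_veth_init commands above build a small virtual topology: two initiator-side veth pairs in the default namespace (10.0.0.1, 10.0.0.2), two target-side pairs inside the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), with all bridge-side peers enslaved to nvmf_br. A condensed sketch of that layout, using the interface names and addresses from the trace; this is a summary of the logged commands, not the verbatim nvmf/common.sh source:

  # One initiator-side pair and one target-side pair shown; the trace
  # creates a second of each (nvmf_init_if2 / nvmf_tgt_if2) the same way.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

The iptables rules that follow then accept TCP port 4420 on the initiator-side interfaces and allow forwarding across the bridge, after which the four pings verify reachability in both directions.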
00:20:13.221 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:13.222 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:13.222 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:20:13.222 00:20:13.222 --- 10.0.0.3 ping statistics --- 00:20:13.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.222 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:13.222 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:13.222 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:20:13.222 00:20:13.222 --- 10.0.0.4 ping statistics --- 00:20:13.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.222 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:13.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:13.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:13.222 00:20:13.222 --- 10.0.0.1 ping statistics --- 00:20:13.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.222 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:13.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:13.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:20:13.222 00:20:13.222 --- 10.0.0.2 ping statistics --- 00:20:13.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.222 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=78986 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 78986 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 78986 ']' 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.222 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:13.222 [2024-11-19 00:04:19.865467] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
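nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the app's RPC socket answers. A minimal sketch of that readiness poll, assuming the stock socket path and using rpc_get_methods as the probe; this is a hypothetical simplification of the real waitforlisten in common/autotest_common.sh, which also supports alternate RPC addresses:

  # Poll until pid $1 serves RPC on socket $2; fail fast if it dies.
  waitfor_rpc() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2> /dev/null || return 1   # app exited early
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
              rpc_get_methods &> /dev/null && return 0
          sleep 0.1
      done
      return 1
  }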
00:20:13.222 [2024-11-19 00:04:19.865848] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.481 [2024-11-19 00:04:20.052537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.740 [2024-11-19 00:04:20.177149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.740 [2024-11-19 00:04:20.177449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.740 [2024-11-19 00:04:20.177664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.740 [2024-11-19 00:04:20.177900] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.740 [2024-11-19 00:04:20.178120] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:13.740 [2024-11-19 00:04:20.179644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.740 [2024-11-19 00:04:20.387403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:14.308 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.308 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:14.308 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:14.308 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:14.308 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:14.308 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.308 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:14.308 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=79024 00:20:14.308 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:20:14.308 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:20:14.308 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:20:14.308 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:20:14.308 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:14.308 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:14.308 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=c9affa69-524c-4eac-b561-66c1b0c8726c 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=0a8c3ce8-1962-4025-8198-500f4851b0a0 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=a0841f78-b087-4741-897d-c2314446cd44 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:14.309 null0 00:20:14.309 null1 00:20:14.309 null2 00:20:14.309 [2024-11-19 00:04:20.934813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.309 [2024-11-19 00:04:20.959032] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 79024 /var/tmp/tgt2.sock 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 79024 ']' 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:20:14.309 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:20:14.568 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:20:14.568 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.568 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:14.568 [2024-11-19 00:04:21.023075] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
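The nsid test drives two targets at once: the first (pid 78986) listens on 10.0.0.3:4420 inside the namespace, while the second (pid 79024, started with -m 2 -r /var/tmp/tgt2.sock) is configured over its own RPC socket and, as the next trace shows, listens on 10.0.0.1:4421. The rpc_cmd batches at target/nsid.sh@63 and @80 are not expanded in the xtrace, so the following is only an illustrative sequence of the kind of calls involved, reusing the subsystem name, serial, address, port, and one namespace UUID that do appear in the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/tgt2.sock nvmf_create_transport -t tcp
  "$rpc" -s /var/tmp/tgt2.sock bdev_null_create null0 1024 4096
  "$rpc" -s /var/tmp/tgt2.sock nvmf_create_subsystem \
      nqn.2024-10.io.spdk:cnode2 -s SPDKISFASTANDAWESOME -a
  "$rpc" -s /var/tmp/tgt2.sock nvmf_subsystem_add_ns \
      nqn.2024-10.io.spdk:cnode2 null0 -u c9affa69-524c-4eac-b561-66c1b0c8726c
  "$rpc" -s /var/tmp/tgt2.sock nvmf_subsystem_add_listener \
      nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421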
00:20:14.568 [2024-11-19 00:04:21.023253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79024 ] 00:20:14.568 [2024-11-19 00:04:21.210930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.826 [2024-11-19 00:04:21.348571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.085 [2024-11-19 00:04:21.579033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:15.653 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.653 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:15.653 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:20:15.912 [2024-11-19 00:04:22.554960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.912 [2024-11-19 00:04:22.571081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:20:15.912 nvme0n1 nvme0n2 00:20:15.912 nvme1n1 00:20:16.172 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:20:16.172 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:20:16.172 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:20:16.172 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:20:16.172 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:20:16.172 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:20:16.172 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:20:16.172 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:20:16.172 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:20:16.172 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:20:16.172 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:16.172 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:16.172 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:16.172 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:20:16.172 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:20:16.172 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:20:17.109 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:17.109 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:17.109 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:17.109 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:17.109 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:17.109 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid c9affa69-524c-4eac-b561-66c1b0c8726c 00:20:17.109 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:17.419 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:20:17.419 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:20:17.419 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:20:17.419 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:17.419 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c9affa69524c4eacb56166c1b0c8726c 00:20:17.419 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C9AFFA69524C4EACB56166C1B0C8726C 00:20:17.419 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ C9AFFA69524C4EACB56166C1B0C8726C == \C\9\A\F\F\A\6\9\5\2\4\C\4\E\A\C\B\5\6\1\6\6\C\1\B\0\C\8\7\2\6\C ]] 00:20:17.419 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:20:17.419 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:17.419 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:17.419 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:20:17.419 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:17.419 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 0a8c3ce8-1962-4025-8198-500f4851b0a0 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0a8c3ce8196240258198500f4851b0a0 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0A8C3CE8196240258198500F4851B0A0 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 0A8C3CE8196240258198500F4851B0A0 == \0\A\8\C\3\C\E\8\1\9\6\2\4\0\2\5\8\1\9\8\5\0\0\F\4\8\5\1\B\0\A\0 ]] 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid a0841f78-b087-4741-897d-c2314446cd44 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:20:17.420 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:17.420 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a0841f78b0874741897dc2314446cd44 00:20:17.420 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A0841F78B0874741897DC2314446CD44 00:20:17.420 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ A0841F78B0874741897DC2314446CD44 == \A\0\8\4\1\F\7\8\B\0\8\7\4\7\4\1\8\9\7\D\C\2\3\1\4\4\4\6\C\D\4\4 ]] 00:20:17.420 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:20:17.679 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:20:17.679 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:20:17.679 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 79024 00:20:17.679 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 79024 ']' 00:20:17.679 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 79024 00:20:17.679 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:17.679 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.679 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79024 00:20:17.679 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:17.680 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:17.680 killing process with pid 79024 00:20:17.680 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79024' 00:20:17.680 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 79024 00:20:17.680 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 79024 00:20:19.583 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:20:19.583 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:19.583 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:20:19.583 
00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:19.583 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:20:19.583 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:19.583 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:19.583 rmmod nvme_tcp 00:20:19.583 rmmod nvme_fabrics 00:20:19.583 rmmod nvme_keyring 00:20:19.583 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:19.584 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:20:19.584 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:20:19.843 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 78986 ']' 00:20:19.843 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 78986 00:20:19.843 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 78986 ']' 00:20:19.843 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 78986 00:20:19.843 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:19.843 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.843 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78986 00:20:19.843 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.843 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.843 killing process with pid 78986 00:20:19.843 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78986' 00:20:19.843 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 78986 00:20:19.843 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 78986 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:20.780 00:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.780 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:20:20.780 00:20:20.780 real 0m8.312s 00:20:20.780 user 0m13.118s 00:20:20.780 sys 0m1.900s 00:20:20.781 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:20.781 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:20.781 ************************************ 00:20:20.781 END TEST nvmf_nsid 00:20:20.781 ************************************ 00:20:20.781 00:04:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:20.781 00:20:20.781 real 7m41.206s 00:20:20.781 user 18m38.831s 00:20:20.781 sys 1m55.675s 00:20:20.781 00:04:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:20.781 ************************************ 00:20:20.781 END TEST nvmf_target_extra 00:20:20.781 ************************************ 00:20:20.781 00:04:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:21.040 00:04:27 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:21.040 00:04:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:21.040 00:04:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.040 00:04:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:21.040 ************************************ 00:20:21.040 START TEST nvmf_host 00:20:21.040 ************************************ 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:21.040 * Looking for test storage... 
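[Editor's note] The NGUID verification that target/nsid.sh completed above reduces to a dash-strip plus an uppercase compare against what `nvme id-ns` reports. A minimal bash sketch, assuming bash 4+, nvme-cli and jq; the helper bodies below are reconstructed from the trace, not copied from the script:

    uuid2nguid() {   # strip dashes and uppercase: a0841f78-... -> A0841F78...
        local u=${1^^}
        tr -d - <<< "$u"
    }
    nvme_get_nguid() {   # read the namespace's NGUID back from the connected controller
        local ctrlr=$1 nsid=$2 nguid
        nguid=$(nvme id-ns "/dev/${ctrlr}n${nsid}" -o json | jq -r .nguid)
        echo "${nguid^^}"
    }
    # the test passes iff the NGUID derived from the configured UUID round-trips
    [[ "$(uuid2nguid a0841f78-b087-4741-897d-c2314446cd44)" == "$(nvme_get_nguid nvme0 3)" ]]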
00:20:21.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:21.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.040 --rc genhtml_branch_coverage=1 00:20:21.040 --rc genhtml_function_coverage=1 00:20:21.040 --rc genhtml_legend=1 00:20:21.040 --rc geninfo_all_blocks=1 00:20:21.040 --rc geninfo_unexecuted_blocks=1 00:20:21.040 00:20:21.040 ' 00:20:21.040 00:04:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:21.040 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:20:21.040 --rc genhtml_branch_coverage=1 00:20:21.040 --rc genhtml_function_coverage=1 00:20:21.040 --rc genhtml_legend=1 00:20:21.041 --rc geninfo_all_blocks=1 00:20:21.041 --rc geninfo_unexecuted_blocks=1 00:20:21.041 00:20:21.041 ' 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:21.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.041 --rc genhtml_branch_coverage=1 00:20:21.041 --rc genhtml_function_coverage=1 00:20:21.041 --rc genhtml_legend=1 00:20:21.041 --rc geninfo_all_blocks=1 00:20:21.041 --rc geninfo_unexecuted_blocks=1 00:20:21.041 00:20:21.041 ' 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:21.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.041 --rc genhtml_branch_coverage=1 00:20:21.041 --rc genhtml_function_coverage=1 00:20:21.041 --rc genhtml_legend=1 00:20:21.041 --rc geninfo_all_blocks=1 00:20:21.041 --rc geninfo_unexecuted_blocks=1 00:20:21.041 00:20:21.041 ' 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:21.041 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:21.041 
00:04:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.041 ************************************ 00:20:21.041 START TEST nvmf_identify 00:20:21.041 ************************************ 00:20:21.041 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:21.301 * Looking for test storage... 00:20:21.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:21.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.301 --rc genhtml_branch_coverage=1 00:20:21.301 --rc genhtml_function_coverage=1 00:20:21.301 --rc genhtml_legend=1 00:20:21.301 --rc geninfo_all_blocks=1 00:20:21.301 --rc geninfo_unexecuted_blocks=1 00:20:21.301 00:20:21.301 ' 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:21.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.301 --rc genhtml_branch_coverage=1 00:20:21.301 --rc genhtml_function_coverage=1 00:20:21.301 --rc genhtml_legend=1 00:20:21.301 --rc geninfo_all_blocks=1 00:20:21.301 --rc geninfo_unexecuted_blocks=1 00:20:21.301 00:20:21.301 ' 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:21.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.301 --rc genhtml_branch_coverage=1 00:20:21.301 --rc genhtml_function_coverage=1 00:20:21.301 --rc genhtml_legend=1 00:20:21.301 --rc geninfo_all_blocks=1 00:20:21.301 --rc geninfo_unexecuted_blocks=1 00:20:21.301 00:20:21.301 ' 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:21.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.301 --rc genhtml_branch_coverage=1 00:20:21.301 --rc genhtml_function_coverage=1 00:20:21.301 --rc genhtml_legend=1 00:20:21.301 --rc geninfo_all_blocks=1 00:20:21.301 --rc geninfo_unexecuted_blocks=1 00:20:21.301 00:20:21.301 ' 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.301 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.302 
00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:21.302 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.302 00:04:27 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:21.302 Cannot find device "nvmf_init_br" 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:21.302 Cannot find device "nvmf_init_br2" 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:21.302 Cannot find device "nvmf_tgt_br" 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:20:21.302 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:20:21.561 Cannot find device "nvmf_tgt_br2" 00:20:21.561 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:20:21.561 00:04:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:21.561 Cannot find device "nvmf_init_br" 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:21.561 Cannot find device "nvmf_init_br2" 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:21.561 Cannot find device "nvmf_tgt_br" 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:21.561 Cannot find device "nvmf_tgt_br2" 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:21.561 Cannot find device "nvmf_br" 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:21.561 Cannot find device "nvmf_init_if" 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:21.561 Cannot find device "nvmf_init_if2" 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:21.561 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:21.561 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:21.561 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:21.562 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:21.562 
00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:21.562 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:21.562 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:21.562 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:21.562 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:21.562 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:21.562 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:21.562 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:21.562 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:21.562 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:21.562 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:21.562 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:21.562 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:21.562 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:21.562 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:21.562 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:21.562 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:21.821 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:21.821 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:21.821 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:21.821 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:21.821 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:21.821 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:21.821 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:21.821 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:21.821 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:21.821 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:20:21.821 00:20:21.822 --- 10.0.0.3 ping statistics --- 00:20:21.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.822 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:21.822 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:21.822 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:20:21.822 00:20:21.822 --- 10.0.0.4 ping statistics --- 00:20:21.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.822 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:21.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:21.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:21.822 00:20:21.822 --- 10.0.0.1 ping statistics --- 00:20:21.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.822 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:21.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:20:21.822 00:20:21.822 --- 10.0.0.2 ping statistics --- 00:20:21.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.822 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=79413 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 79413 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 79413 ']' 00:20:21.822 
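[Editor's note] The four pings above close out nvmf_veth_init. Condensed to one initiator/target pair (the trace builds a second pair of each the same way), the topology it stood up is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br                     # enslave both bridge-side ends
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3    # initiator -> target across the bridge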
00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.822 00:04:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:21.822 [2024-11-19 00:04:28.416652] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:20:21.822 [2024-11-19 00:04:28.416861] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.081 [2024-11-19 00:04:28.583490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.081 [2024-11-19 00:04:28.675008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.081 [2024-11-19 00:04:28.675080] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.081 [2024-11-19 00:04:28.675113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.081 [2024-11-19 00:04:28.675124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.081 [2024-11-19 00:04:28.675136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
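[Editor's note] The target launch plus waitforlisten above amounts to starting nvmf_tgt inside the namespace and polling its RPC socket until it answers. A sketch; the polling loop is illustrative rather than the harness's exact implementation (rpc_get_methods is a standard SPDK RPC):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died during startup
        sleep 0.5
    done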
00:20:22.081 [2024-11-19 00:04:28.676898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.081 [2024-11-19 00:04:28.677020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.081 [2024-11-19 00:04:28.677127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.081 [2024-11-19 00:04:28.677151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.341 [2024-11-19 00:04:28.853289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.908 [2024-11-19 00:04:29.429908] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.908 Malloc0 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.908 [2024-11-19 00:04:29.574207] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.908 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.908 [ 00:20:22.908 { 00:20:22.908 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:22.908 "subtype": "Discovery", 00:20:22.908 "listen_addresses": [ 00:20:22.908 { 00:20:22.908 "trtype": "TCP", 00:20:22.908 "adrfam": "IPv4", 00:20:22.908 "traddr": "10.0.0.3", 00:20:22.908 "trsvcid": "4420" 00:20:22.908 } 00:20:22.908 ], 00:20:22.908 "allow_any_host": true, 00:20:22.908 "hosts": [] 00:20:22.908 }, 00:20:22.908 { 00:20:23.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.167 "subtype": "NVMe", 00:20:23.167 "listen_addresses": [ 00:20:23.167 { 00:20:23.167 "trtype": "TCP", 00:20:23.167 "adrfam": "IPv4", 00:20:23.167 "traddr": "10.0.0.3", 00:20:23.167 "trsvcid": "4420" 00:20:23.167 } 00:20:23.167 ], 00:20:23.167 "allow_any_host": true, 00:20:23.167 "hosts": [], 00:20:23.167 "serial_number": "SPDK00000000000001", 00:20:23.167 "model_number": "SPDK bdev Controller", 00:20:23.167 "max_namespaces": 32, 00:20:23.167 "min_cntlid": 1, 00:20:23.167 "max_cntlid": 65519, 00:20:23.167 "namespaces": [ 00:20:23.167 { 00:20:23.167 "nsid": 1, 00:20:23.167 "bdev_name": "Malloc0", 00:20:23.167 "name": "Malloc0", 00:20:23.167 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:23.167 "eui64": "ABCDEF0123456789", 00:20:23.167 "uuid": "5bc267fd-f2cb-45b0-8a60-f3a97ed7fb35" 00:20:23.167 } 00:20:23.167 ] 00:20:23.167 } 00:20:23.167 ] 00:20:23.167 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.167 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:23.167 [2024-11-19 00:04:29.656208] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
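[Editor's note] The subsystem listing above is the product of the rpc_cmd calls that precede it. Spelled out as direct rpc.py invocations (the harness routes the same commands through its rpc_cmd wrapper; the wrapper function here is an illustrative stand-in):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192          # same arguments as rpc_cmd above
    rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM disk, 512-byte blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420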
00:20:23.167 [2024-11-19 00:04:29.656355] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79449 ] 00:20:23.167 [2024-11-19 00:04:29.845227] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:20:23.167 [2024-11-19 00:04:29.845370] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:23.167 [2024-11-19 00:04:29.845384] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:23.167 [2024-11-19 00:04:29.845423] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:23.167 [2024-11-19 00:04:29.845438] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:23.167 [2024-11-19 00:04:29.845882] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:20:23.167 [2024-11-19 00:04:29.845968] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:20:23.429 [2024-11-19 00:04:29.861684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:23.429 [2024-11-19 00:04:29.861736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:23.429 [2024-11-19 00:04:29.861747] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:23.429 [2024-11-19 00:04:29.861754] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:23.429 [2024-11-19 00:04:29.861839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.861856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.861864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.429 [2024-11-19 00:04:29.861888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:23.429 [2024-11-19 00:04:29.861931] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.429 [2024-11-19 00:04:29.872699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.429 [2024-11-19 00:04:29.872731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.429 [2024-11-19 00:04:29.872756] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.872764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.429 [2024-11-19 00:04:29.872787] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:23.429 [2024-11-19 00:04:29.872803] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:20:23.429 [2024-11-19 00:04:29.872813] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:20:23.429 [2024-11-19 00:04:29.872836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.872845] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.872853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.429 [2024-11-19 00:04:29.872869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.429 [2024-11-19 00:04:29.872905] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.429 [2024-11-19 00:04:29.873011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.429 [2024-11-19 00:04:29.873027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.429 [2024-11-19 00:04:29.873035] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.873042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.429 [2024-11-19 00:04:29.873054] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:20:23.429 [2024-11-19 00:04:29.873067] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:20:23.429 [2024-11-19 00:04:29.873097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.873105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.873112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.429 [2024-11-19 00:04:29.873130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.429 [2024-11-19 00:04:29.873163] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.429 [2024-11-19 00:04:29.873238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.429 [2024-11-19 00:04:29.873251] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.429 [2024-11-19 00:04:29.873257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.873264] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.429 [2024-11-19 00:04:29.873275] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:20:23.429 [2024-11-19 00:04:29.873291] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:23.429 [2024-11-19 00:04:29.873309] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.873318] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.873325] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.429 [2024-11-19 00:04:29.873348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.429 [2024-11-19 00:04:29.873381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.429 [2024-11-19 00:04:29.873468] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.429 [2024-11-19 00:04:29.873485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.429 [2024-11-19 00:04:29.873492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.873498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.429 [2024-11-19 00:04:29.873509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:23.429 [2024-11-19 00:04:29.873527] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.873540] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.873547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.429 [2024-11-19 00:04:29.873560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.429 [2024-11-19 00:04:29.873588] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.429 [2024-11-19 00:04:29.873677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.429 [2024-11-19 00:04:29.873691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.429 [2024-11-19 00:04:29.873697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.873704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.429 [2024-11-19 00:04:29.873713] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:23.429 [2024-11-19 00:04:29.873723] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:23.429 [2024-11-19 00:04:29.873738] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:23.429 [2024-11-19 00:04:29.873847] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:20:23.429 [2024-11-19 00:04:29.873856] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:23.429 [2024-11-19 00:04:29.873880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.873890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.873900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.429 [2024-11-19 00:04:29.873914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.429 [2024-11-19 00:04:29.873945] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.429 [2024-11-19 00:04:29.874030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.429 [2024-11-19 00:04:29.874052] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.429 [2024-11-19 00:04:29.874059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.874066] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.429 [2024-11-19 00:04:29.874076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:23.429 [2024-11-19 00:04:29.874095] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.874104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.429 [2024-11-19 00:04:29.874111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.429 [2024-11-19 00:04:29.874124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.429 [2024-11-19 00:04:29.874151] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.429 [2024-11-19 00:04:29.874220] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.430 [2024-11-19 00:04:29.874233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.430 [2024-11-19 00:04:29.874239] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.874245] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.430 [2024-11-19 00:04:29.874254] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:23.430 [2024-11-19 00:04:29.874269] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:20:23.430 [2024-11-19 00:04:29.874295] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:20:23.430 [2024-11-19 00:04:29.874309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:23.430 [2024-11-19 00:04:29.874331] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.874340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.430 [2024-11-19 00:04:29.874355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.430 [2024-11-19 00:04:29.874385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.430 [2024-11-19 00:04:29.874499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.430 [2024-11-19 00:04:29.874515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.430 [2024-11-19 00:04:29.874522] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.874529] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:20:23.430 [2024-11-19 00:04:29.874538] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:23.430 [2024-11-19 00:04:29.874547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.874564] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.874572] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.874588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.430 [2024-11-19 00:04:29.874630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.430 [2024-11-19 00:04:29.874639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.874646] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.430 [2024-11-19 00:04:29.874665] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:20:23.430 [2024-11-19 00:04:29.874691] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:20:23.430 [2024-11-19 00:04:29.874700] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:20:23.430 [2024-11-19 00:04:29.874709] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:20:23.430 [2024-11-19 00:04:29.874718] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:20:23.430 [2024-11-19 00:04:29.874728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:20:23.430 [2024-11-19 00:04:29.874744] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:23.430 [2024-11-19 00:04:29.874763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.874772] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.874780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.430 [2024-11-19 00:04:29.874798] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.430 [2024-11-19 00:04:29.874833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.430 [2024-11-19 00:04:29.874922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.430 [2024-11-19 00:04:29.874935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.430 [2024-11-19 00:04:29.874948] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.874956] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.430 [2024-11-19 00:04:29.874988] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.874996] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.875018] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.430 [2024-11-19 00:04:29.875037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.430 [2024-11-19 00:04:29.875048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.875054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.875060] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:20:23.430 [2024-11-19 00:04:29.875070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.430 [2024-11-19 00:04:29.875079] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.875085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.875096] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:20:23.430 [2024-11-19 00:04:29.875106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.430 [2024-11-19 00:04:29.875116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.875122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.875128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.430 [2024-11-19 00:04:29.875137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.430 [2024-11-19 00:04:29.875146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:23.430 [2024-11-19 00:04:29.875170] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:23.430 [2024-11-19 00:04:29.875182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.875190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:23.430 [2024-11-19 00:04:29.875203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.430 [2024-11-19 00:04:29.875236] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.430 [2024-11-19 00:04:29.875252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:20:23.430 [2024-11-19 00:04:29.875261] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:20:23.430 [2024-11-19 00:04:29.875268] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.430 [2024-11-19 00:04:29.875275] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:23.430 [2024-11-19 00:04:29.875388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.430 [2024-11-19 00:04:29.875400] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.430 [2024-11-19 00:04:29.875406] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.875413] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:23.430 [2024-11-19 00:04:29.875423] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:20:23.430 [2024-11-19 00:04:29.875432] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:20:23.430 [2024-11-19 00:04:29.875454] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.875464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:23.430 [2024-11-19 00:04:29.875478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.430 [2024-11-19 00:04:29.875511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:23.430 [2024-11-19 00:04:29.875604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.430 [2024-11-19 00:04:29.875616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.430 [2024-11-19 00:04:29.875623] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.875633] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:23.430 [2024-11-19 00:04:29.875642] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:23.430 [2024-11-19 00:04:29.875650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.875677] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.875687] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.875700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.430 [2024-11-19 00:04:29.875710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.430 [2024-11-19 00:04:29.875715] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.875727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:23.430 [2024-11-19 00:04:29.875756] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:20:23.430 [2024-11-19 00:04:29.875815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.875834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:23.430 [2024-11-19 00:04:29.875849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.430 [2024-11-19 00:04:29.875862] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.430 [2024-11-19 00:04:29.875869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:20:23.430 [2024-11-19 00:04:29.875876] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:23.430 [2024-11-19 00:04:29.875891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.430 [2024-11-19 00:04:29.875926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:23.431 [2024-11-19 00:04:29.875947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:23.431 [2024-11-19 00:04:29.876188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.431 [2024-11-19 00:04:29.876237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.431 [2024-11-19 00:04:29.876245] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.431 [2024-11-19 00:04:29.876253] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4 00:20:23.431 [2024-11-19 00:04:29.876262] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024 00:20:23.431 [2024-11-19 00:04:29.876274] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.431 [2024-11-19 00:04:29.876288] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.431 [2024-11-19 00:04:29.876295] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.431 [2024-11-19 00:04:29.876305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.431 [2024-11-19 00:04:29.876315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.431 [2024-11-19 00:04:29.876321] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.431 [2024-11-19 00:04:29.876328] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:23.431 [2024-11-19 00:04:29.876357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.431 [2024-11-19 00:04:29.876370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.431 [2024-11-19 00:04:29.876375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.431 [2024-11-19 00:04:29.876386] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:23.431 [2024-11-19 00:04:29.876421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.431 [2024-11-19 00:04:29.876439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:23.431 [2024-11-19 00:04:29.876454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.431 [2024-11-19 00:04:29.876495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:23.431 [2024-11-19 00:04:29.880668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.431 [2024-11-19 00:04:29.880713] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.431 [2024-11-19 00:04:29.880721] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.431 [2024-11-19 00:04:29.880729] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x61500000f080): datao=0, datal=3072, cccid=4 00:20:23.431 [2024-11-19 00:04:29.880737] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=3072 00:20:23.431 [2024-11-19 00:04:29.880750] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.431 [2024-11-19 00:04:29.880765] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.431 [2024-11-19 00:04:29.880772] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.431 [2024-11-19 00:04:29.880782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.431 [2024-11-19 00:04:29.880791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.431 [2024-11-19 00:04:29.880798] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.431 [2024-11-19 00:04:29.880805] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:23.431 [2024-11-19 00:04:29.880835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.431 [2024-11-19 00:04:29.880846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:23.431 [2024-11-19 00:04:29.880862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.431 [2024-11-19 00:04:29.880912] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:23.431 [2024-11-19 00:04:29.881044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.431 [2024-11-19 00:04:29.881057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.431 [2024-11-19 00:04:29.881063] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.431 [2024-11-19 00:04:29.881070] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4 00:20:23.431 [2024-11-19 00:04:29.881078] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8 00:20:23.431 [2024-11-19 00:04:29.881086] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.431 [2024-11-19 00:04:29.881097] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.431 [2024-11-19 00:04:29.881104] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.431 [2024-11-19 00:04:29.881133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.431 [2024-11-19 00:04:29.881146] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.431 [2024-11-19 00:04:29.881152] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.431 [2024-11-19 00:04:29.881160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:23.431 ===================================================== 00:20:23.431 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:23.431 ===================================================== 00:20:23.431 Controller Capabilities/Features 00:20:23.431 ================================ 00:20:23.431 Vendor ID: 0000 00:20:23.431 Subsystem Vendor ID: 0000 00:20:23.431 Serial Number: .................... 
00:20:23.431 Model Number: ........................................ 00:20:23.431 Firmware Version: 25.01 00:20:23.431 Recommended Arb Burst: 0 00:20:23.431 IEEE OUI Identifier: 00 00 00 00:20:23.431 Multi-path I/O 00:20:23.431 May have multiple subsystem ports: No 00:20:23.431 May have multiple controllers: No 00:20:23.431 Associated with SR-IOV VF: No 00:20:23.431 Max Data Transfer Size: 131072 00:20:23.431 Max Number of Namespaces: 0 00:20:23.431 Max Number of I/O Queues: 1024 00:20:23.431 NVMe Specification Version (VS): 1.3 00:20:23.431 NVMe Specification Version (Identify): 1.3 00:20:23.431 Maximum Queue Entries: 128 00:20:23.431 Contiguous Queues Required: Yes 00:20:23.431 Arbitration Mechanisms Supported 00:20:23.431 Weighted Round Robin: Not Supported 00:20:23.431 Vendor Specific: Not Supported 00:20:23.431 Reset Timeout: 15000 ms 00:20:23.431 Doorbell Stride: 4 bytes 00:20:23.431 NVM Subsystem Reset: Not Supported 00:20:23.431 Command Sets Supported 00:20:23.431 NVM Command Set: Supported 00:20:23.431 Boot Partition: Not Supported 00:20:23.431 Memory Page Size Minimum: 4096 bytes 00:20:23.431 Memory Page Size Maximum: 4096 bytes 00:20:23.431 Persistent Memory Region: Not Supported 00:20:23.431 Optional Asynchronous Events Supported 00:20:23.431 Namespace Attribute Notices: Not Supported 00:20:23.431 Firmware Activation Notices: Not Supported 00:20:23.431 ANA Change Notices: Not Supported 00:20:23.431 PLE Aggregate Log Change Notices: Not Supported 00:20:23.431 LBA Status Info Alert Notices: Not Supported 00:20:23.431 EGE Aggregate Log Change Notices: Not Supported 00:20:23.431 Normal NVM Subsystem Shutdown event: Not Supported 00:20:23.431 Zone Descriptor Change Notices: Not Supported 00:20:23.431 Discovery Log Change Notices: Supported 00:20:23.431 Controller Attributes 00:20:23.431 128-bit Host Identifier: Not Supported 00:20:23.431 Non-Operational Permissive Mode: Not Supported 00:20:23.431 NVM Sets: Not Supported 00:20:23.431 Read Recovery Levels: Not Supported 00:20:23.431 Endurance Groups: Not Supported 00:20:23.431 Predictable Latency Mode: Not Supported 00:20:23.431 Traffic Based Keep ALive: Not Supported 00:20:23.431 Namespace Granularity: Not Supported 00:20:23.431 SQ Associations: Not Supported 00:20:23.431 UUID List: Not Supported 00:20:23.431 Multi-Domain Subsystem: Not Supported 00:20:23.431 Fixed Capacity Management: Not Supported 00:20:23.431 Variable Capacity Management: Not Supported 00:20:23.431 Delete Endurance Group: Not Supported 00:20:23.431 Delete NVM Set: Not Supported 00:20:23.431 Extended LBA Formats Supported: Not Supported 00:20:23.431 Flexible Data Placement Supported: Not Supported 00:20:23.431 00:20:23.431 Controller Memory Buffer Support 00:20:23.431 ================================ 00:20:23.431 Supported: No 00:20:23.431 00:20:23.431 Persistent Memory Region Support 00:20:23.431 ================================ 00:20:23.431 Supported: No 00:20:23.431 00:20:23.431 Admin Command Set Attributes 00:20:23.431 ============================ 00:20:23.431 Security Send/Receive: Not Supported 00:20:23.431 Format NVM: Not Supported 00:20:23.431 Firmware Activate/Download: Not Supported 00:20:23.431 Namespace Management: Not Supported 00:20:23.431 Device Self-Test: Not Supported 00:20:23.431 Directives: Not Supported 00:20:23.431 NVMe-MI: Not Supported 00:20:23.431 Virtualization Management: Not Supported 00:20:23.431 Doorbell Buffer Config: Not Supported 00:20:23.431 Get LBA Status Capability: Not Supported 00:20:23.431 Command & Feature Lockdown Capability: 
Not Supported 00:20:23.431 Abort Command Limit: 1 00:20:23.431 Async Event Request Limit: 4 00:20:23.431 Number of Firmware Slots: N/A 00:20:23.431 Firmware Slot 1 Read-Only: N/A 00:20:23.431 Firmware Activation Without Reset: N/A 00:20:23.431 Multiple Update Detection Support: N/A 00:20:23.431 Firmware Update Granularity: No Information Provided 00:20:23.431 Per-Namespace SMART Log: No 00:20:23.431 Asymmetric Namespace Access Log Page: Not Supported 00:20:23.431 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:23.431 Command Effects Log Page: Not Supported 00:20:23.432 Get Log Page Extended Data: Supported 00:20:23.432 Telemetry Log Pages: Not Supported 00:20:23.432 Persistent Event Log Pages: Not Supported 00:20:23.432 Supported Log Pages Log Page: May Support 00:20:23.432 Commands Supported & Effects Log Page: Not Supported 00:20:23.432 Feature Identifiers & Effects Log Page:May Support 00:20:23.432 NVMe-MI Commands & Effects Log Page: May Support 00:20:23.432 Data Area 4 for Telemetry Log: Not Supported 00:20:23.432 Error Log Page Entries Supported: 128 00:20:23.432 Keep Alive: Not Supported 00:20:23.432 00:20:23.432 NVM Command Set Attributes 00:20:23.432 ========================== 00:20:23.432 Submission Queue Entry Size 00:20:23.432 Max: 1 00:20:23.432 Min: 1 00:20:23.432 Completion Queue Entry Size 00:20:23.432 Max: 1 00:20:23.432 Min: 1 00:20:23.432 Number of Namespaces: 0 00:20:23.432 Compare Command: Not Supported 00:20:23.432 Write Uncorrectable Command: Not Supported 00:20:23.432 Dataset Management Command: Not Supported 00:20:23.432 Write Zeroes Command: Not Supported 00:20:23.432 Set Features Save Field: Not Supported 00:20:23.432 Reservations: Not Supported 00:20:23.432 Timestamp: Not Supported 00:20:23.432 Copy: Not Supported 00:20:23.432 Volatile Write Cache: Not Present 00:20:23.432 Atomic Write Unit (Normal): 1 00:20:23.432 Atomic Write Unit (PFail): 1 00:20:23.432 Atomic Compare & Write Unit: 1 00:20:23.432 Fused Compare & Write: Supported 00:20:23.432 Scatter-Gather List 00:20:23.432 SGL Command Set: Supported 00:20:23.432 SGL Keyed: Supported 00:20:23.432 SGL Bit Bucket Descriptor: Not Supported 00:20:23.432 SGL Metadata Pointer: Not Supported 00:20:23.432 Oversized SGL: Not Supported 00:20:23.432 SGL Metadata Address: Not Supported 00:20:23.432 SGL Offset: Supported 00:20:23.432 Transport SGL Data Block: Not Supported 00:20:23.432 Replay Protected Memory Block: Not Supported 00:20:23.432 00:20:23.432 Firmware Slot Information 00:20:23.432 ========================= 00:20:23.432 Active slot: 0 00:20:23.432 00:20:23.432 00:20:23.432 Error Log 00:20:23.432 ========= 00:20:23.432 00:20:23.432 Active Namespaces 00:20:23.432 ================= 00:20:23.432 Discovery Log Page 00:20:23.432 ================== 00:20:23.432 Generation Counter: 2 00:20:23.432 Number of Records: 2 00:20:23.432 Record Format: 0 00:20:23.432 00:20:23.432 Discovery Log Entry 0 00:20:23.432 ---------------------- 00:20:23.432 Transport Type: 3 (TCP) 00:20:23.432 Address Family: 1 (IPv4) 00:20:23.432 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:23.432 Entry Flags: 00:20:23.432 Duplicate Returned Information: 1 00:20:23.432 Explicit Persistent Connection Support for Discovery: 1 00:20:23.432 Transport Requirements: 00:20:23.432 Secure Channel: Not Required 00:20:23.432 Port ID: 0 (0x0000) 00:20:23.432 Controller ID: 65535 (0xffff) 00:20:23.432 Admin Max SQ Size: 128 00:20:23.432 Transport Service Identifier: 4420 00:20:23.432 NVM Subsystem Qualified Name: 
nqn.2014-08.org.nvmexpress.discovery 00:20:23.432 Transport Address: 10.0.0.3 00:20:23.432 Discovery Log Entry 1 00:20:23.432 ---------------------- 00:20:23.432 Transport Type: 3 (TCP) 00:20:23.432 Address Family: 1 (IPv4) 00:20:23.432 Subsystem Type: 2 (NVM Subsystem) 00:20:23.432 Entry Flags: 00:20:23.432 Duplicate Returned Information: 0 00:20:23.432 Explicit Persistent Connection Support for Discovery: 0 00:20:23.432 Transport Requirements: 00:20:23.432 Secure Channel: Not Required 00:20:23.432 Port ID: 0 (0x0000) 00:20:23.432 Controller ID: 65535 (0xffff) 00:20:23.432 Admin Max SQ Size: 128 00:20:23.432 Transport Service Identifier: 4420 00:20:23.432 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:23.432 Transport Address: 10.0.0.3 [2024-11-19 00:04:29.881313] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:20:23.432 [2024-11-19 00:04:29.881335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.432 [2024-11-19 00:04:29.881350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.432 [2024-11-19 00:04:29.881360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:20:23.432 [2024-11-19 00:04:29.881370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.432 [2024-11-19 00:04:29.881378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:20:23.432 [2024-11-19 00:04:29.881387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.432 [2024-11-19 00:04:29.881395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.432 [2024-11-19 00:04:29.881410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.432 [2024-11-19 00:04:29.881450] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.432 [2024-11-19 00:04:29.881460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.432 [2024-11-19 00:04:29.881467] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.432 [2024-11-19 00:04:29.881482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.432 [2024-11-19 00:04:29.881515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.432 [2024-11-19 00:04:29.881581] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.432 [2024-11-19 00:04:29.881630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.432 [2024-11-19 00:04:29.881659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.432 [2024-11-19 00:04:29.881670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.432 [2024-11-19 00:04:29.881685] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.432 [2024-11-19 00:04:29.881699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.432 
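Annotation: the *DEBUG* lines throughout this trace print NVMe/TCP PDU opcodes numerically ("pdu type = 1", "= 5", "= 7"). For reference, a sketch of those values as defined by the NVMe over Fabrics TCP transport specification; the enum below is illustrative and its names follow the spec rather than SPDK's internal definitions:

    /* NVMe/TCP PDU types (per the NVMe-oF TCP transport spec); these are the
     * numeric "pdu type" values printed by nvme_tcp_pdu_ch_handle() above. */
    enum nvme_tcp_pdu_type {
            NVME_TCP_PDU_ICREQ        = 0x00, /* host -> ctrlr: initialize connection */
            NVME_TCP_PDU_ICRESP       = 0x01, /* "pdu type = 1": ICResp back to the host */
            NVME_TCP_PDU_H2C_TERM     = 0x02,
            NVME_TCP_PDU_C2H_TERM     = 0x03,
            NVME_TCP_PDU_CAPSULE_CMD  = 0x04,
            NVME_TCP_PDU_CAPSULE_RESP = 0x05, /* "pdu type = 5": admin command response */
            NVME_TCP_PDU_H2C_DATA     = 0x06,
            NVME_TCP_PDU_C2H_DATA     = 0x07, /* "pdu type = 7": read data, e.g. the IDENTIFY payload */
            NVME_TCP_PDU_R2T          = 0x09,
    };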
[2024-11-19 00:04:29.881706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.432 [2024-11-19 00:04:29.881722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.432 [2024-11-19 00:04:29.881762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.432 [2024-11-19 00:04:29.881863] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.432 [2024-11-19 00:04:29.881876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.432 [2024-11-19 00:04:29.881883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.432 [2024-11-19 00:04:29.881890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.432 [2024-11-19 00:04:29.881900] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:20:23.432 [2024-11-19 00:04:29.881909] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:20:23.432 [2024-11-19 00:04:29.881928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.432 [2024-11-19 00:04:29.881940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.432 [2024-11-19 00:04:29.881951] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.432 [2024-11-19 00:04:29.881967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.432 [2024-11-19 00:04:29.881997] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.432 [2024-11-19 00:04:29.882063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.432 [2024-11-19 00:04:29.882075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.432 [2024-11-19 00:04:29.882086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.882094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.433 [2024-11-19 00:04:29.882113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.882122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.882128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.433 [2024-11-19 00:04:29.882141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.433 [2024-11-19 00:04:29.882168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.433 [2024-11-19 00:04:29.882238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.433 [2024-11-19 00:04:29.882250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.433 [2024-11-19 00:04:29.882257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.882264] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.433 [2024-11-19 00:04:29.882282] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.882290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.882297] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.433 [2024-11-19 00:04:29.882309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.433 [2024-11-19 00:04:29.882336] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.433 [2024-11-19 00:04:29.882410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.433 [2024-11-19 00:04:29.882423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.433 [2024-11-19 00:04:29.882429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.882436] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.433 [2024-11-19 00:04:29.882454] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.882462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.882468] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.433 [2024-11-19 00:04:29.882481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.433 [2024-11-19 00:04:29.882512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.433 [2024-11-19 00:04:29.882575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.433 [2024-11-19 00:04:29.882587] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.433 [2024-11-19 00:04:29.882593] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.882616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.433 [2024-11-19 00:04:29.882636] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.882653] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.882661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.433 [2024-11-19 00:04:29.882674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.433 [2024-11-19 00:04:29.882703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.433 [2024-11-19 00:04:29.882787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.433 [2024-11-19 00:04:29.882799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.433 [2024-11-19 00:04:29.882809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.882817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.433 [2024-11-19 00:04:29.882835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.882843] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.882849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.433 [2024-11-19 00:04:29.882861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.433 [2024-11-19 00:04:29.882887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.433 [2024-11-19 00:04:29.882961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.433 [2024-11-19 00:04:29.882973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.433 [2024-11-19 00:04:29.882979] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.882986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.433 [2024-11-19 00:04:29.883004] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.883012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.883018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.433 [2024-11-19 00:04:29.883030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.433 [2024-11-19 00:04:29.883056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.433 [2024-11-19 00:04:29.883138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.433 [2024-11-19 00:04:29.883150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.433 [2024-11-19 00:04:29.883157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.883164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.433 [2024-11-19 00:04:29.883181] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.883190] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.883196] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.433 [2024-11-19 00:04:29.883213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.433 [2024-11-19 00:04:29.883240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.433 [2024-11-19 00:04:29.883312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.433 [2024-11-19 00:04:29.883324] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.433 [2024-11-19 00:04:29.883330] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.883337] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.433 [2024-11-19 00:04:29.883359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.883368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.883375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.433 [2024-11-19 00:04:29.883390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.433 [2024-11-19 00:04:29.883432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.433 [2024-11-19 00:04:29.883501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.433 [2024-11-19 00:04:29.883517] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.433 [2024-11-19 00:04:29.883524] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.883530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.433 [2024-11-19 00:04:29.883548] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.883556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.883562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.433 [2024-11-19 00:04:29.883574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.433 [2024-11-19 00:04:29.883600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.433 [2024-11-19 00:04:29.883682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.433 [2024-11-19 00:04:29.883696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.433 [2024-11-19 00:04:29.883702] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.883709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.433 [2024-11-19 00:04:29.883727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.883735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.883741] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.433 [2024-11-19 00:04:29.883754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.433 [2024-11-19 00:04:29.883784] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.433 [2024-11-19 00:04:29.883846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.433 [2024-11-19 00:04:29.883859] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.433 [2024-11-19 00:04:29.883865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.883872] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.433 [2024-11-19 00:04:29.883889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.883897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.433 [2024-11-19 00:04:29.883903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.433 [2024-11-19 00:04:29.883916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.433 [2024-11-19 00:04:29.883942] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.433 [2024-11-19 00:04:29.884006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.434 [2024-11-19 00:04:29.884018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.434 [2024-11-19 00:04:29.884025] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.434 [2024-11-19 00:04:29.884031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.434 [2024-11-19 00:04:29.884048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.434 [2024-11-19 00:04:29.884056] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.434 [2024-11-19 00:04:29.884066] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.434 [2024-11-19 00:04:29.884079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.434 [2024-11-19 00:04:29.884105] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.434 [2024-11-19 00:04:29.884176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.434 [2024-11-19 00:04:29.884187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.434 [2024-11-19 00:04:29.884205] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.434 [2024-11-19 00:04:29.884235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.434 [2024-11-19 00:04:29.884257] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.434 [2024-11-19 00:04:29.884265] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.434 [2024-11-19 00:04:29.884272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.434 [2024-11-19 00:04:29.884285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.434 [2024-11-19 00:04:29.884313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.434 [2024-11-19 00:04:29.884394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.434 [2024-11-19 00:04:29.884406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.434 [2024-11-19 00:04:29.884413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.434 [2024-11-19 00:04:29.884420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.434 [2024-11-19 00:04:29.884438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.434 [2024-11-19 00:04:29.884446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.434 [2024-11-19 00:04:29.884452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.434 [2024-11-19 00:04:29.884465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.434 [2024-11-19 00:04:29.884491] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.434 [2024-11-19 00:04:29.884561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.434 [2024-11-19 00:04:29.884573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.434 [2024-11-19 00:04:29.884594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.434 [2024-11-19 00:04:29.884601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.434 [2024-11-19 00:04:29.884630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.434 [2024-11-19 00:04:29.888697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.434 [2024-11-19 00:04:29.888712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.434 [2024-11-19 00:04:29.888733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.434 [2024-11-19 00:04:29.888770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.434 [2024-11-19 00:04:29.888843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.434 [2024-11-19 00:04:29.888855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.434 [2024-11-19 00:04:29.888862] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.434 [2024-11-19 00:04:29.888868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.434 [2024-11-19 00:04:29.888888] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:20:23.434 00:20:23.434 00:04:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:23.434 [2024-11-19 00:04:29.993908] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
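Annotation: the command line above drives everything that follows. The -r option takes a whitespace-separated "key:value" transport ID string, and -L all is what enables the *DEBUG* log flags that fill this trace. A minimal sketch of the same setup through SPDK's C API; the exact option-to-call mapping is an assumption based on SPDK's example tools:

    #include "spdk/log.h"
    #include "spdk/nvme.h"

    /* Sketch of what the '-r' and '-L all' options above amount to. */
    static int set_up_target(struct spdk_nvme_transport_id *trid)
    {
            spdk_log_set_print_level(SPDK_LOG_DEBUG);
            spdk_log_set_flag("all");       /* assumed equivalent of '-L all' */

            /* Same string the test passes to '-r'. */
            return spdk_nvme_transport_id_parse(trid,
                    "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
                    "subnqn:nqn.2016-06.io.spdk:cnode1");
    }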
00:20:23.434 [2024-11-19 00:04:29.994035] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79452 ] 00:20:23.695 [2024-11-19 00:04:30.172879] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:20:23.695 [2024-11-19 00:04:30.173034] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:23.695 [2024-11-19 00:04:30.173053] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:23.695 [2024-11-19 00:04:30.173079] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:23.695 [2024-11-19 00:04:30.173096] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:23.695 [2024-11-19 00:04:30.173522] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:20:23.695 [2024-11-19 00:04:30.173597] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:20:23.695 [2024-11-19 00:04:30.184692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:23.695 [2024-11-19 00:04:30.184739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:23.695 [2024-11-19 00:04:30.184750] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:23.695 [2024-11-19 00:04:30.184757] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:23.695 [2024-11-19 00:04:30.184840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.184856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.184873] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.695 [2024-11-19 00:04:30.184896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:23.695 [2024-11-19 00:04:30.184935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.695 [2024-11-19 00:04:30.192731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.695 [2024-11-19 00:04:30.192761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.695 [2024-11-19 00:04:30.192787] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.192796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.695 [2024-11-19 00:04:30.192824] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:23.695 [2024-11-19 00:04:30.192842] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:20:23.695 [2024-11-19 00:04:30.192853] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:20:23.695 [2024-11-19 00:04:30.192874] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.192883] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.695 
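Annotation: everything from "setting state to connect adminq" down to the IDENTIFY data at the end of this trace happens inside a single blocking call in the host API. A minimal sketch, assuming SPDK's public NVMe host API and with error handling trimmed; the app name is hypothetical:

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
            struct spdk_env_opts opts;
            struct spdk_nvme_transport_id trid = {};
            struct spdk_nvme_ctrlr *ctrlr;

            spdk_env_opts_init(&opts);
            opts.name = "identify_sketch";          /* hypothetical app name */
            if (spdk_env_init(&opts) != 0) {
                    return 1;
            }

            spdk_nvme_transport_id_parse(&trid,
                    "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
                    "subnqn:nqn.2016-06.io.spdk:cnode1");

            /* One call covers the whole bring-up traced here: socket connect,
             * ICReq/ICResp, FABRIC CONNECT, CC.EN = 1, poll CSTS.RDY = 1,
             * IDENTIFY, AER configuration, keep-alive negotiation. */
            ctrlr = spdk_nvme_connect(&trid, NULL, 0);
            if (ctrlr == NULL) {
                    return 1;
            }

            const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
            printf("CNTLID: 0x%04x\n", cdata->cntlid);  /* 0x0001 in the trace above */

            spdk_nvme_detach(ctrlr);    /* runs the shutdown path seen earlier */
            return 0;
    }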
[2024-11-19 00:04:30.192890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.695 [2024-11-19 00:04:30.192906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.695 [2024-11-19 00:04:30.192942] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.695 [2024-11-19 00:04:30.193048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.695 [2024-11-19 00:04:30.193061] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.695 [2024-11-19 00:04:30.193072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.193080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.695 [2024-11-19 00:04:30.193092] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:20:23.695 [2024-11-19 00:04:30.193106] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:20:23.695 [2024-11-19 00:04:30.193119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.193127] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.193135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.695 [2024-11-19 00:04:30.193152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.695 [2024-11-19 00:04:30.193185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.695 [2024-11-19 00:04:30.193254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.695 [2024-11-19 00:04:30.193267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.695 [2024-11-19 00:04:30.193273] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.193283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.695 [2024-11-19 00:04:30.193294] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:20:23.695 [2024-11-19 00:04:30.193312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:23.695 [2024-11-19 00:04:30.193327] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.193336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.193343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.695 [2024-11-19 00:04:30.193357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.695 [2024-11-19 00:04:30.193385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.695 [2024-11-19 00:04:30.193451] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.695 [2024-11-19 00:04:30.193469] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.695 [2024-11-19 00:04:30.193476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.193483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.695 [2024-11-19 00:04:30.193493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:23.695 [2024-11-19 00:04:30.193511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.193519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.193531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.695 [2024-11-19 00:04:30.193545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.695 [2024-11-19 00:04:30.193573] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.695 [2024-11-19 00:04:30.193677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.695 [2024-11-19 00:04:30.193691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.695 [2024-11-19 00:04:30.193698] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.193705] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.695 [2024-11-19 00:04:30.193715] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:23.695 [2024-11-19 00:04:30.193725] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:23.695 [2024-11-19 00:04:30.193740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:23.695 [2024-11-19 00:04:30.193850] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:20:23.695 [2024-11-19 00:04:30.193859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:23.695 [2024-11-19 00:04:30.193880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.193889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.193897] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.695 [2024-11-19 00:04:30.193914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.695 [2024-11-19 00:04:30.193945] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.695 [2024-11-19 00:04:30.194033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.695 [2024-11-19 00:04:30.194058] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.695 [2024-11-19 00:04:30.194066] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.695 
[2024-11-19 00:04:30.194073] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.695 [2024-11-19 00:04:30.194084] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:23.695 [2024-11-19 00:04:30.194103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.194112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.194119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.695 [2024-11-19 00:04:30.194133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.695 [2024-11-19 00:04:30.194161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.695 [2024-11-19 00:04:30.194233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.695 [2024-11-19 00:04:30.194247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.695 [2024-11-19 00:04:30.194253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.194260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.695 [2024-11-19 00:04:30.194269] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:23.695 [2024-11-19 00:04:30.194279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:20:23.695 [2024-11-19 00:04:30.194306] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:20:23.695 [2024-11-19 00:04:30.194323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:23.695 [2024-11-19 00:04:30.194343] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.194352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.695 [2024-11-19 00:04:30.194367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.695 [2024-11-19 00:04:30.194402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.695 [2024-11-19 00:04:30.194529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.695 [2024-11-19 00:04:30.194545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.695 [2024-11-19 00:04:30.194552] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.194560] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:20:23.695 [2024-11-19 00:04:30.194569] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:23.695 [2024-11-19 00:04:30.194577] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:20:23.695 [2024-11-19 00:04:30.194591] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.194619] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.194674] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.695 [2024-11-19 00:04:30.194685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.695 [2024-11-19 00:04:30.194695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.194703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.695 [2024-11-19 00:04:30.194723] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:20:23.695 [2024-11-19 00:04:30.194734] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:20:23.695 [2024-11-19 00:04:30.194742] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:20:23.695 [2024-11-19 00:04:30.194751] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:20:23.695 [2024-11-19 00:04:30.194760] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:20:23.695 [2024-11-19 00:04:30.194770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:20:23.695 [2024-11-19 00:04:30.194786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:23.695 [2024-11-19 00:04:30.194801] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.194810] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.695 [2024-11-19 00:04:30.194824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.695 [2024-11-19 00:04:30.194840] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.696 [2024-11-19 00:04:30.194873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.696 [2024-11-19 00:04:30.194949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.696 [2024-11-19 00:04:30.194962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.696 [2024-11-19 00:04:30.194968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.194976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.696 [2024-11-19 00:04:30.194994] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.195018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.195044] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:23.696 [2024-11-19 00:04:30.195065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.696 [2024-11-19 00:04:30.195077] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.195084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.195090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:20:23.696 [2024-11-19 00:04:30.195100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.696 [2024-11-19 00:04:30.195109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.195116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.195122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:20:23.696 [2024-11-19 00:04:30.195132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.696 [2024-11-19 00:04:30.195141] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.195148] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.195156] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.696 [2024-11-19 00:04:30.195167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.696 [2024-11-19 00:04:30.195176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:23.696 [2024-11-19 00:04:30.195196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:23.696 [2024-11-19 00:04:30.195209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.195220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:23.696 [2024-11-19 00:04:30.195232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.696 [2024-11-19 00:04:30.195265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:23.696 [2024-11-19 00:04:30.195278] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:20:23.696 [2024-11-19 00:04:30.195285] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:20:23.696 [2024-11-19 00:04:30.195293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.696 [2024-11-19 00:04:30.195300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:23.696 [2024-11-19 00:04:30.195414] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.696 [2024-11-19 00:04:30.195427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.696 [2024-11-19 00:04:30.195433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.195445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:23.696 [2024-11-19 00:04:30.195456] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:20:23.696 [2024-11-19 00:04:30.195466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:23.696 [2024-11-19 00:04:30.195483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:20:23.696 [2024-11-19 00:04:30.195494] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:23.696 [2024-11-19 00:04:30.195505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.195513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.195521] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:23.696 [2024-11-19 00:04:30.195535] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.696 [2024-11-19 00:04:30.195566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:23.696 [2024-11-19 00:04:30.195636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.696 [2024-11-19 00:04:30.195648] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.696 [2024-11-19 00:04:30.195654] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.195681] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:23.696 [2024-11-19 00:04:30.195774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:20:23.696 [2024-11-19 00:04:30.195801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:23.696 [2024-11-19 00:04:30.195820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.195829] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:23.696 [2024-11-19 00:04:30.195847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.696 [2024-11-19 00:04:30.195878] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:23.696 [2024-11-19 00:04:30.195991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.696 [2024-11-19 00:04:30.196007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.696 [2024-11-19 00:04:30.196014] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.196021] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:23.696 [2024-11-19 00:04:30.196029] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:23.696 [2024-11-19 00:04:30.196037] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.196052] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.196060] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.196073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.696 [2024-11-19 00:04:30.196083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.696 [2024-11-19 00:04:30.196090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.196096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:23.696 [2024-11-19 00:04:30.196137] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:20:23.696 [2024-11-19 00:04:30.196160] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:20:23.696 [2024-11-19 00:04:30.196187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:20:23.696 [2024-11-19 00:04:30.196235] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.196249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:23.696 [2024-11-19 00:04:30.196266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.696 [2024-11-19 00:04:30.196299] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:23.696 [2024-11-19 00:04:30.196421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.696 [2024-11-19 00:04:30.196434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.696 [2024-11-19 00:04:30.196441] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.196448] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:23.696 [2024-11-19 00:04:30.196456] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:23.696 [2024-11-19 00:04:30.196464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.196480] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.196488] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.196504] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.696 [2024-11-19 00:04:30.196515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.696 [2024-11-19 00:04:30.196522] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.196529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:23.696 [2024-11-19 00:04:30.196581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:23.696 [2024-11-19 00:04:30.196620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:23.696 [2024-11-19 00:04:30.200683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.200709] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:23.696 [2024-11-19 00:04:30.200743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.696 [2024-11-19 00:04:30.200785] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:23.696 [2024-11-19 00:04:30.200878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.696 [2024-11-19 00:04:30.200890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.696 [2024-11-19 00:04:30.200897] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.696 [2024-11-19 00:04:30.200906] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:23.697 [2024-11-19 00:04:30.200914] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:23.697 [2024-11-19 00:04:30.200922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.200933] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.200940] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.200984] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.697 [2024-11-19 00:04:30.200995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.697 [2024-11-19 00:04:30.201001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.201008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:23.697 [2024-11-19 00:04:30.201040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:23.697 [2024-11-19 00:04:30.201057] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:20:23.697 [2024-11-19 00:04:30.201078] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:20:23.697 [2024-11-19 00:04:30.201094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:23.697 [2024-11-19 00:04:30.201103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:23.697 [2024-11-19 00:04:30.201112] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:20:23.697 [2024-11-19 00:04:30.201122] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:20:23.697 [2024-11-19 00:04:30.201130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:20:23.697 [2024-11-19 00:04:30.201139] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:20:23.697 [2024-11-19 00:04:30.201175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.201185] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:23.697 [2024-11-19 00:04:30.201200] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.697 [2024-11-19 00:04:30.201212] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.201220] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.201227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:23.697 [2024-11-19 00:04:30.201245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.697 [2024-11-19 00:04:30.201284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:23.697 [2024-11-19 00:04:30.201298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:23.697 [2024-11-19 00:04:30.201386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.697 [2024-11-19 00:04:30.201399] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.697 [2024-11-19 00:04:30.201406] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.201414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:23.697 [2024-11-19 00:04:30.201426] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.697 [2024-11-19 00:04:30.201436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.697 [2024-11-19 00:04:30.201447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.201454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:23.697 [2024-11-19 00:04:30.201470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.201484] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:23.697 [2024-11-19 00:04:30.201498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.697 [2024-11-19 00:04:30.201525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:23.697 [2024-11-19 00:04:30.201610] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.697 [2024-11-19 00:04:30.201625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.697 [2024-11-19 00:04:30.201648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.201656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:23.697 [2024-11-19 00:04:30.201675] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.201683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:23.697 [2024-11-19 00:04:30.201696] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.697 [2024-11-19 00:04:30.201726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:23.697 [2024-11-19 00:04:30.201794] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.697 [2024-11-19 00:04:30.201806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.697 [2024-11-19 00:04:30.201812] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.201819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:23.697 [2024-11-19 00:04:30.201836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.201844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:23.697 [2024-11-19 00:04:30.201860] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.697 [2024-11-19 00:04:30.201890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:23.697 [2024-11-19 00:04:30.201960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.697 [2024-11-19 00:04:30.201972] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.697 [2024-11-19 00:04:30.201978] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.201988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:23.697 [2024-11-19 00:04:30.202020] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.202031] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:23.697 [2024-11-19 00:04:30.202045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.697 [2024-11-19 00:04:30.202060] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.202068] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:23.697 [2024-11-19 00:04:30.202084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.697 [2024-11-19 00:04:30.202097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.202105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080) 00:20:23.697 [2024-11-19 00:04:30.202119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.697 [2024-11-19 00:04:30.202136] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:20:23.697 [2024-11-19 00:04:30.202144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:20:23.697 [2024-11-19 00:04:30.202156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.697 [2024-11-19 00:04:30.202186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:23.697 [2024-11-19 00:04:30.202199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:23.697 [2024-11-19 00:04:30.202206] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:20:23.697 [2024-11-19 00:04:30.202214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:20:23.697 [2024-11-19 00:04:30.202413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.697 [2024-11-19 00:04:30.202442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.697 [2024-11-19 00:04:30.202451] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.202464] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:20:23.697 [2024-11-19 00:04:30.202472] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:20:23.697 [2024-11-19 00:04:30.202480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.202511] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.202521] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.202531] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.697 [2024-11-19 00:04:30.202540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.697 [2024-11-19 00:04:30.202549] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.202556] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:20:23.697 [2024-11-19 00:04:30.202564] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:20:23.697 [2024-11-19 00:04:30.202571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.202581] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.202588] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.202626] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.697 [2024-11-19 00:04:30.202637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.697 [2024-11-19 00:04:30.202644] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.202650] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 00:20:23.697 [2024-11-19 00:04:30.202658] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:20:23.697 
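(Editor's note: the four GET LOG PAGE commands printed above encode the log identifier and transfer size in cdw10 — per the NVMe spec, LID sits in bits 7:0 and NUMDL (number of dwords minus one) in bits 31:16. A small illustrative sketch decoding the cdw10 values seen here; the results line up with the c2h_data datal values in the surrounding trace:

  # Decode the GET LOG PAGE cdw10 values from the trace above.
  # LID = bits 7:0, NUMDL (dwords - 1) = bits 31:16 per the NVMe spec.
  for cdw10 in 0x07ff0001 0x007f0002 0x007f0003 0x03ff0005; do
      lid=$(( cdw10 & 0xff ))
      numdl=$(( (cdw10 >> 16) & 0xffff ))
      bytes=$(( (numdl + 1) * 4 ))
      printf 'cdw10=%s -> LID 0x%02x, %d bytes\n' "$cdw10" "$lid" "$bytes"
  done
  # -> LID 0x01 (error log)       8192 bytes  (matches datal=8192, cccid=5)
  #    LID 0x02 (SMART/health)     512 bytes  (matches datal=512,  cccid=4)
  #    LID 0x03 (firmware slot)    512 bytes  (matches datal=512,  cccid=6)
  #    LID 0x05 (cmds & effects)  4096 bytes  (matches datal=4096, cccid=7)
)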
[2024-11-19 00:04:30.202665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.202680] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.202688] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.697 [2024-11-19 00:04:30.202697] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.698 [2024-11-19 00:04:30.202706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.698 [2024-11-19 00:04:30.202712] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.698 [2024-11-19 00:04:30.202719] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:20:23.698 [2024-11-19 00:04:30.202726] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:23.698 [2024-11-19 00:04:30.202733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.698 [2024-11-19 00:04:30.202744] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.698 [2024-11-19 00:04:30.202750] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.698 [2024-11-19 00:04:30.202762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.698 [2024-11-19 00:04:30.202771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.698 [2024-11-19 00:04:30.202777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.698 [2024-11-19 00:04:30.202785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:23.698 [2024-11-19 00:04:30.202814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.698 [2024-11-19 00:04:30.202825] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.698 [2024-11-19 00:04:30.202831] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.698 [2024-11-19 00:04:30.202838] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:23.698 [2024-11-19 00:04:30.202854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.698 [2024-11-19 00:04:30.202864] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.698 [2024-11-19 00:04:30.202873] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.698 [2024-11-19 00:04:30.202880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:20:23.698 [2024-11-19 00:04:30.202894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.698 [2024-11-19 00:04:30.202904] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.698 [2024-11-19 00:04:30.202910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.698 [2024-11-19 00:04:30.202917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:20:23.698 ===================================================== 00:20:23.698 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:23.698 ===================================================== 00:20:23.698 Controller Capabilities/Features 00:20:23.698 ================================ 00:20:23.698 Vendor ID: 8086 00:20:23.698 Subsystem Vendor ID: 8086 
00:20:23.698 Serial Number: SPDK00000000000001 00:20:23.698 Model Number: SPDK bdev Controller 00:20:23.698 Firmware Version: 25.01 00:20:23.698 Recommended Arb Burst: 6 00:20:23.698 IEEE OUI Identifier: e4 d2 5c 00:20:23.698 Multi-path I/O 00:20:23.698 May have multiple subsystem ports: Yes 00:20:23.698 May have multiple controllers: Yes 00:20:23.698 Associated with SR-IOV VF: No 00:20:23.698 Max Data Transfer Size: 131072 00:20:23.698 Max Number of Namespaces: 32 00:20:23.698 Max Number of I/O Queues: 127 00:20:23.698 NVMe Specification Version (VS): 1.3 00:20:23.698 NVMe Specification Version (Identify): 1.3 00:20:23.698 Maximum Queue Entries: 128 00:20:23.698 Contiguous Queues Required: Yes 00:20:23.698 Arbitration Mechanisms Supported 00:20:23.698 Weighted Round Robin: Not Supported 00:20:23.698 Vendor Specific: Not Supported 00:20:23.698 Reset Timeout: 15000 ms 00:20:23.698 Doorbell Stride: 4 bytes 00:20:23.698 NVM Subsystem Reset: Not Supported 00:20:23.698 Command Sets Supported 00:20:23.698 NVM Command Set: Supported 00:20:23.698 Boot Partition: Not Supported 00:20:23.698 Memory Page Size Minimum: 4096 bytes 00:20:23.698 Memory Page Size Maximum: 4096 bytes 00:20:23.698 Persistent Memory Region: Not Supported 00:20:23.698 Optional Asynchronous Events Supported 00:20:23.698 Namespace Attribute Notices: Supported 00:20:23.698 Firmware Activation Notices: Not Supported 00:20:23.698 ANA Change Notices: Not Supported 00:20:23.698 PLE Aggregate Log Change Notices: Not Supported 00:20:23.698 LBA Status Info Alert Notices: Not Supported 00:20:23.698 EGE Aggregate Log Change Notices: Not Supported 00:20:23.698 Normal NVM Subsystem Shutdown event: Not Supported 00:20:23.698 Zone Descriptor Change Notices: Not Supported 00:20:23.698 Discovery Log Change Notices: Not Supported 00:20:23.698 Controller Attributes 00:20:23.698 128-bit Host Identifier: Supported 00:20:23.698 Non-Operational Permissive Mode: Not Supported 00:20:23.698 NVM Sets: Not Supported 00:20:23.698 Read Recovery Levels: Not Supported 00:20:23.698 Endurance Groups: Not Supported 00:20:23.698 Predictable Latency Mode: Not Supported 00:20:23.698 Traffic Based Keep ALive: Not Supported 00:20:23.698 Namespace Granularity: Not Supported 00:20:23.698 SQ Associations: Not Supported 00:20:23.698 UUID List: Not Supported 00:20:23.698 Multi-Domain Subsystem: Not Supported 00:20:23.698 Fixed Capacity Management: Not Supported 00:20:23.698 Variable Capacity Management: Not Supported 00:20:23.698 Delete Endurance Group: Not Supported 00:20:23.698 Delete NVM Set: Not Supported 00:20:23.698 Extended LBA Formats Supported: Not Supported 00:20:23.698 Flexible Data Placement Supported: Not Supported 00:20:23.698 00:20:23.698 Controller Memory Buffer Support 00:20:23.698 ================================ 00:20:23.698 Supported: No 00:20:23.698 00:20:23.698 Persistent Memory Region Support 00:20:23.698 ================================ 00:20:23.698 Supported: No 00:20:23.698 00:20:23.698 Admin Command Set Attributes 00:20:23.698 ============================ 00:20:23.698 Security Send/Receive: Not Supported 00:20:23.698 Format NVM: Not Supported 00:20:23.698 Firmware Activate/Download: Not Supported 00:20:23.698 Namespace Management: Not Supported 00:20:23.698 Device Self-Test: Not Supported 00:20:23.698 Directives: Not Supported 00:20:23.698 NVMe-MI: Not Supported 00:20:23.698 Virtualization Management: Not Supported 00:20:23.698 Doorbell Buffer Config: Not Supported 00:20:23.698 Get LBA Status Capability: Not Supported 00:20:23.698 Command & 
Feature Lockdown Capability: Not Supported 00:20:23.698 Abort Command Limit: 4 00:20:23.698 Async Event Request Limit: 4 00:20:23.698 Number of Firmware Slots: N/A 00:20:23.698 Firmware Slot 1 Read-Only: N/A 00:20:23.698 Firmware Activation Without Reset: N/A 00:20:23.698 Multiple Update Detection Support: N/A 00:20:23.698 Firmware Update Granularity: No Information Provided 00:20:23.698 Per-Namespace SMART Log: No 00:20:23.698 Asymmetric Namespace Access Log Page: Not Supported 00:20:23.698 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:23.698 Command Effects Log Page: Supported 00:20:23.698 Get Log Page Extended Data: Supported 00:20:23.698 Telemetry Log Pages: Not Supported 00:20:23.698 Persistent Event Log Pages: Not Supported 00:20:23.698 Supported Log Pages Log Page: May Support 00:20:23.698 Commands Supported & Effects Log Page: Not Supported 00:20:23.698 Feature Identifiers & Effects Log Page:May Support 00:20:23.698 NVMe-MI Commands & Effects Log Page: May Support 00:20:23.698 Data Area 4 for Telemetry Log: Not Supported 00:20:23.698 Error Log Page Entries Supported: 128 00:20:23.698 Keep Alive: Supported 00:20:23.698 Keep Alive Granularity: 10000 ms 00:20:23.698 00:20:23.698 NVM Command Set Attributes 00:20:23.698 ========================== 00:20:23.698 Submission Queue Entry Size 00:20:23.698 Max: 64 00:20:23.698 Min: 64 00:20:23.698 Completion Queue Entry Size 00:20:23.698 Max: 16 00:20:23.698 Min: 16 00:20:23.698 Number of Namespaces: 32 00:20:23.698 Compare Command: Supported 00:20:23.698 Write Uncorrectable Command: Not Supported 00:20:23.698 Dataset Management Command: Supported 00:20:23.698 Write Zeroes Command: Supported 00:20:23.698 Set Features Save Field: Not Supported 00:20:23.698 Reservations: Supported 00:20:23.698 Timestamp: Not Supported 00:20:23.698 Copy: Supported 00:20:23.698 Volatile Write Cache: Present 00:20:23.698 Atomic Write Unit (Normal): 1 00:20:23.698 Atomic Write Unit (PFail): 1 00:20:23.698 Atomic Compare & Write Unit: 1 00:20:23.698 Fused Compare & Write: Supported 00:20:23.698 Scatter-Gather List 00:20:23.698 SGL Command Set: Supported 00:20:23.698 SGL Keyed: Supported 00:20:23.698 SGL Bit Bucket Descriptor: Not Supported 00:20:23.698 SGL Metadata Pointer: Not Supported 00:20:23.698 Oversized SGL: Not Supported 00:20:23.698 SGL Metadata Address: Not Supported 00:20:23.698 SGL Offset: Supported 00:20:23.698 Transport SGL Data Block: Not Supported 00:20:23.698 Replay Protected Memory Block: Not Supported 00:20:23.698 00:20:23.698 Firmware Slot Information 00:20:23.698 ========================= 00:20:23.698 Active slot: 1 00:20:23.698 Slot 1 Firmware Revision: 25.01 00:20:23.698 00:20:23.698 00:20:23.698 Commands Supported and Effects 00:20:23.698 ============================== 00:20:23.698 Admin Commands 00:20:23.698 -------------- 00:20:23.698 Get Log Page (02h): Supported 00:20:23.699 Identify (06h): Supported 00:20:23.699 Abort (08h): Supported 00:20:23.699 Set Features (09h): Supported 00:20:23.699 Get Features (0Ah): Supported 00:20:23.699 Asynchronous Event Request (0Ch): Supported 00:20:23.699 Keep Alive (18h): Supported 00:20:23.699 I/O Commands 00:20:23.699 ------------ 00:20:23.699 Flush (00h): Supported LBA-Change 00:20:23.699 Write (01h): Supported LBA-Change 00:20:23.699 Read (02h): Supported 00:20:23.699 Compare (05h): Supported 00:20:23.699 Write Zeroes (08h): Supported LBA-Change 00:20:23.699 Dataset Management (09h): Supported LBA-Change 00:20:23.699 Copy (19h): Supported LBA-Change 00:20:23.699 00:20:23.699 Error Log 00:20:23.699 
========= 00:20:23.699 00:20:23.699 Arbitration 00:20:23.699 =========== 00:20:23.699 Arbitration Burst: 1 00:20:23.699 00:20:23.699 Power Management 00:20:23.699 ================ 00:20:23.699 Number of Power States: 1 00:20:23.699 Current Power State: Power State #0 00:20:23.699 Power State #0: 00:20:23.699 Max Power: 0.00 W 00:20:23.699 Non-Operational State: Operational 00:20:23.699 Entry Latency: Not Reported 00:20:23.699 Exit Latency: Not Reported 00:20:23.699 Relative Read Throughput: 0 00:20:23.699 Relative Read Latency: 0 00:20:23.699 Relative Write Throughput: 0 00:20:23.699 Relative Write Latency: 0 00:20:23.699 Idle Power: Not Reported 00:20:23.699 Active Power: Not Reported 00:20:23.699 Non-Operational Permissive Mode: Not Supported 00:20:23.699 00:20:23.699 Health Information 00:20:23.699 ================== 00:20:23.699 Critical Warnings: 00:20:23.699 Available Spare Space: OK 00:20:23.699 Temperature: OK 00:20:23.699 Device Reliability: OK 00:20:23.699 Read Only: No 00:20:23.699 Volatile Memory Backup: OK 00:20:23.699 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:23.699 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:23.699 Available Spare: 0% 00:20:23.699 Available Spare Threshold: 0% 00:20:23.699 Life Percentage Used:[2024-11-19 00:04:30.203102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.699 [2024-11-19 00:04:30.203114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:20:23.699 [2024-11-19 00:04:30.203129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.699 [2024-11-19 00:04:30.203163] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:20:23.699 [2024-11-19 00:04:30.203241] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.699 [2024-11-19 00:04:30.203254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.699 [2024-11-19 00:04:30.203261] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.699 [2024-11-19 00:04:30.203269] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:20:23.699 [2024-11-19 00:04:30.203365] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:20:23.699 [2024-11-19 00:04:30.203403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:23.699 [2024-11-19 00:04:30.203419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.699 [2024-11-19 00:04:30.203429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:20:23.699 [2024-11-19 00:04:30.203438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.699 [2024-11-19 00:04:30.203446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:20:23.699 [2024-11-19 00:04:30.203455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.699 [2024-11-19 00:04:30.203463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 
00:20:23.699 [2024-11-19 00:04:30.203471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.699 [2024-11-19 00:04:30.203486] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.699 [2024-11-19 00:04:30.203495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.699 [2024-11-19 00:04:30.203502] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.699 [2024-11-19 00:04:30.203516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.699 [2024-11-19 00:04:30.203561] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.699 [2024-11-19 00:04:30.203650] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.699 [2024-11-19 00:04:30.203666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.699 [2024-11-19 00:04:30.203673] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.699 [2024-11-19 00:04:30.203681] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.699 [2024-11-19 00:04:30.203696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.699 [2024-11-19 00:04:30.203705] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.699 [2024-11-19 00:04:30.203712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.699 [2024-11-19 00:04:30.203727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.699 [2024-11-19 00:04:30.203772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.699 [2024-11-19 00:04:30.203874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.699 [2024-11-19 00:04:30.203890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.699 [2024-11-19 00:04:30.203897] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.699 [2024-11-19 00:04:30.203904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.699 [2024-11-19 00:04:30.203914] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:20:23.699 [2024-11-19 00:04:30.203924] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:20:23.699 [2024-11-19 00:04:30.203941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.699 [2024-11-19 00:04:30.203950] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.699 [2024-11-19 00:04:30.203958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.699 [2024-11-19 00:04:30.203986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.699 [2024-11-19 00:04:30.204014] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.699 [2024-11-19 00:04:30.204081] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.699 [2024-11-19 
00:04:30.204093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.699 [2024-11-19 00:04:30.204099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.699 [2024-11-19 00:04:30.204106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.699 [2024-11-19 00:04:30.204123] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.699 [2024-11-19 00:04:30.204132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.699 [2024-11-19 00:04:30.204138] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.699 [2024-11-19 00:04:30.204150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.699 [2024-11-19 00:04:30.204176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.699 [2024-11-19 00:04:30.204276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.699 [2024-11-19 00:04:30.204290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.699 [2024-11-19 00:04:30.204297] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.699 [2024-11-19 00:04:30.204304] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.699 [2024-11-19 00:04:30.204326] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.699 [2024-11-19 00:04:30.204336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.699 [2024-11-19 00:04:30.204345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.699 [2024-11-19 00:04:30.204359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.699 [2024-11-19 00:04:30.204387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.699 [2024-11-19 00:04:30.204467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.699 [2024-11-19 00:04:30.204480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.699 [2024-11-19 00:04:30.204486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.700 [2024-11-19 00:04:30.204497] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:23.700 [2024-11-19 00:04:30.204515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.700 [2024-11-19 00:04:30.204524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.700 [2024-11-19 00:04:30.204531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:23.700 [2024-11-19 00:04:30.204548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.700 [2024-11-19 00:04:30.204591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:23.700 [2024-11-19 00:04:30.208632] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.700 [2024-11-19 00:04:30.208663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.700 [2024-11-19 00:04:30.208677] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:23.700 [2024-11-19 00:04:30.208687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080
00:20:23.700 [2024-11-19 00:04:30.208709] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:23.700 [2024-11-19 00:04:30.208719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:23.700 [2024-11-19 00:04:30.208726] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080)
00:20:23.700 [2024-11-19 00:04:30.208742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:23.700 [2024-11-19 00:04:30.208776] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:20:23.700 [2024-11-19 00:04:30.208855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:23.700 [2024-11-19 00:04:30.208868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:23.700 [2024-11-19 00:04:30.208875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:23.700 [2024-11-19 00:04:30.208883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080
00:20:23.700 [2024-11-19 00:04:30.208898] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds
00:20:23.700 0%
00:20:23.700 Data Units Read: 0
00:20:23.700 Data Units Written: 0
00:20:23.700 Host Read Commands: 0
00:20:23.700 Host Write Commands: 0
00:20:23.700 Controller Busy Time: 0 minutes
00:20:23.700 Power Cycles: 0
00:20:23.700 Power On Hours: 0 hours
00:20:23.700 Unsafe Shutdowns: 0
00:20:23.700 Unrecoverable Media Errors: 0
00:20:23.700 Lifetime Error Log Entries: 0
00:20:23.700 Warning Temperature Time: 0 minutes
00:20:23.700 Critical Temperature Time: 0 minutes
00:20:23.700
00:20:23.700 Number of Queues
00:20:23.700 ================
00:20:23.700 Number of I/O Submission Queues: 127
00:20:23.700 Number of I/O Completion Queues: 127
00:20:23.700
00:20:23.700 Active Namespaces
00:20:23.700 =================
00:20:23.700 Namespace ID:1
00:20:23.700 Error Recovery Timeout: Unlimited
00:20:23.700 Command Set Identifier: NVM (00h)
00:20:23.700 Deallocate: Supported
00:20:23.700 Deallocated/Unwritten Error: Not Supported
00:20:23.700 Deallocated Read Value: Unknown
00:20:23.700 Deallocate in Write Zeroes: Not Supported
00:20:23.700 Deallocated Guard Field: 0xFFFF
00:20:23.700 Flush: Supported
00:20:23.700 Reservation: Supported
00:20:23.700 Namespace Sharing Capabilities: Multiple Controllers
00:20:23.700 Size (in LBAs): 131072 (0GiB)
00:20:23.700 Capacity (in LBAs): 131072 (0GiB)
00:20:23.700 Utilization (in LBAs): 131072 (0GiB)
00:20:23.700 NGUID: ABCDEF0123456789ABCDEF0123456789
00:20:23.700 EUI64: ABCDEF0123456789
00:20:23.700 UUID: 5bc267fd-f2cb-45b0-8a60-f3a97ed7fb35
00:20:23.700 Thin Provisioning: Not Supported
00:20:23.700 Per-NS Atomic Units: Yes
00:20:23.700 Atomic Boundary Size (Normal): 0
00:20:23.700 Atomic Boundary Size (PFail): 0
00:20:23.700 Atomic Boundary Offset: 0
00:20:23.700 Maximum Single Source Range Length: 65535
00:20:23.700 Maximum Copy Length: 65535
00:20:23.700 Maximum Source Range Count: 1
00:20:23.700 NGUID/EUI64 Never Reused: No
00:20:23.700 Namespace Write Protected: No
00:20:23.700 Number of LBA Formats: 1
00:20:23.700 Current LBA Format: LBA Format #00
00:20:23.700 LBA Format #00: Data Size: 512 Metadata Size: 0
00:20:23.700
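The controller and namespace dump above is identify-example output collected over the TCP listener. As a rough sketch of how the same dump could be reproduced by hand against this target, assuming SPDK's usual example binary location (the log does not show the actual invocation, so both the path and the transport string here are assumptions):

  # Hypothetical reproduction of the identify dump above; path and transport
  # ID string are assumed from this environment, not taken from the log.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/examples/identify" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'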
00:20:23.700 00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:23.700 00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:20:23.958 00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 79413 ']'
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 79413
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 79413 ']'
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 79413
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79413
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 79413
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79413'
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 79413
00:04:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 79413
00:20:24.895 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:20:24.895
00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:24.895 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:20:24.895 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:20:24.895 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:24.895 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:24.895 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:24.895 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:24.895 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:24.895 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:24.895 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:24.895 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:24.895 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:24.895 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:24.895 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:25.154 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:25.154 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:25.154 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:25.154 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:25.154 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:25.154 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.154 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.154 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.154 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:20:25.154 00:20:25.154 real 0m4.014s 00:20:25.154 user 0m11.200s 00:20:25.154 sys 0m0.916s 00:20:25.154 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:25.154 00:04:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:25.154 ************************************ 00:20:25.154 END TEST nvmf_identify 00:20:25.154 ************************************ 00:20:25.154 00:04:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:25.154 00:04:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:25.154 00:04:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:25.154 00:04:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.154 ************************************ 00:20:25.154 START TEST nvmf_perf 
00:20:25.154 ************************************ 00:20:25.154 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:25.413 * Looking for test storage... 00:20:25.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:25.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.413 --rc genhtml_branch_coverage=1 00:20:25.413 --rc genhtml_function_coverage=1 00:20:25.413 --rc genhtml_legend=1 00:20:25.413 --rc geninfo_all_blocks=1 00:20:25.413 --rc geninfo_unexecuted_blocks=1 00:20:25.413 00:20:25.413 ' 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:25.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.413 --rc genhtml_branch_coverage=1 00:20:25.413 --rc genhtml_function_coverage=1 00:20:25.413 --rc genhtml_legend=1 00:20:25.413 --rc geninfo_all_blocks=1 00:20:25.413 --rc geninfo_unexecuted_blocks=1 00:20:25.413 00:20:25.413 ' 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:25.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.413 --rc genhtml_branch_coverage=1 00:20:25.413 --rc genhtml_function_coverage=1 00:20:25.413 --rc genhtml_legend=1 00:20:25.413 --rc geninfo_all_blocks=1 00:20:25.413 --rc geninfo_unexecuted_blocks=1 00:20:25.413 00:20:25.413 ' 00:20:25.413 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:25.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.414 --rc genhtml_branch_coverage=1 00:20:25.414 --rc genhtml_function_coverage=1 00:20:25.414 --rc genhtml_legend=1 00:20:25.414 --rc geninfo_all_blocks=1 00:20:25.414 --rc geninfo_unexecuted_blocks=1 00:20:25.414 00:20:25.414 ' 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:20:25.414 00:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:25.414 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:25.414 Cannot find device "nvmf_init_br" 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:25.414 Cannot find device "nvmf_init_br2" 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:25.414 Cannot find device "nvmf_tgt_br" 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:25.414 Cannot find device "nvmf_tgt_br2" 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:25.414 Cannot find device "nvmf_init_br" 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:25.414 Cannot find device "nvmf_init_br2" 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:25.414 Cannot find device "nvmf_tgt_br" 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:25.414 Cannot find device "nvmf_tgt_br2" 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:20:25.414 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:25.673 Cannot find device "nvmf_br" 00:20:25.673 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:20:25.673 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:25.673 Cannot find device "nvmf_init_if" 00:20:25.673 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:20:25.673 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:25.673 Cannot find device "nvmf_init_if2" 00:20:25.673 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:20:25.673 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:25.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:25.673 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:20:25.673 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:25.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:25.673 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:20:25.673 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:25.673 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:25.673 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:25.673 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:25.673 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:25.673 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:25.673 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:25.673 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:25.673 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:25.673 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:25.673 00:04:32 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:20:25.933 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:20:25.933 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:20:25.933 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms
00:20:25.933
00:20:25.933 --- 10.0.0.3 ping statistics ---
00:20:25.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:25.933 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms
00:20:25.933 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:20:25.933 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:20:25.933 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms
00:20:25.933
00:20:25.933 --- 10.0.0.4 ping statistics ---
00:20:25.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:25.933 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms
00:20:25.933 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:20:25.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:25.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms
00:20:25.933
00:20:25.933 --- 10.0.0.1 ping statistics ---
00:20:25.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:25.933 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms
00:20:25.933 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:20:25.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:25.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms
00:20:25.933
00:20:25.933 --- 10.0.0.2 ping statistics ---
00:20:25.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:25.933 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms
00:20:25.933 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=79684
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 79684
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 79684 ']'
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:25.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
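For readers reconstructing the test bed from the trace above: nvmf_veth_init builds a bridged veth topology and the target then runs inside the network namespace. A condensed sketch using only commands and names that appear in this log (the second veth pair and the iptables ACCEPT rules are omitted for brevity):

  # Condensed from the nvmf_veth_init trace above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # ...bring all links up, then launch the target inside the namespace:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF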
00:20:25.933 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.933 00:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:25.933 [2024-11-19 00:04:32.588830] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:20:25.933 [2024-11-19 00:04:32.589216] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.193 [2024-11-19 00:04:32.779232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.451 [2024-11-19 00:04:32.893988] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.451 [2024-11-19 00:04:32.894066] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.451 [2024-11-19 00:04:32.894101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.451 [2024-11-19 00:04:32.894129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.451 [2024-11-19 00:04:32.894142] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.451 [2024-11-19 00:04:32.896233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.451 [2024-11-19 00:04:32.896349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.451 [2024-11-19 00:04:32.896501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.451 [2024-11-19 00:04:32.897102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.451 [2024-11-19 00:04:33.088723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:27.019 00:04:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.019 00:04:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:20:27.019 00:04:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:27.019 00:04:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:27.019 00:04:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:27.019 00:04:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.019 00:04:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:27.019 00:04:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:27.588 00:04:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:27.588 00:04:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:27.847 00:04:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:20:27.847 00:04:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:28.105 00:04:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:28.105 00:04:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']'
00:04:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:04:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:04:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:20:28.364 [2024-11-19 00:04:34.953017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:28.364 00:04:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:28.623 00:04:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:04:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:28.882 00:04:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:04:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:20:29.141 00:04:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:20:29.401 [2024-11-19 00:04:35.964408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:20:29.401 00:04:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:20:29.660 00:04:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']'
00:04:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0'
00:04:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:04:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0'
00:20:31.036 Initializing NVMe Controllers
00:20:31.036 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:20:31.036 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:20:31.036 Initialization complete. Launching workers.
00:20:31.036 ========================================================
00:20:31.036 Latency(us)
00:20:31.036 Device Information : IOPS MiB/s Average min max
00:20:31.036 PCIE (0000:00:10.0) NSID 1 from core 0: 21396.00 83.58 1494.65 321.58 6159.93
00:20:31.036 ========================================================
00:20:31.036 Total : 21396.00 83.58 1494.65 321.58 6159.93
00:20:31.036
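Stripped of the xtrace noise, the target-side bring-up that precedes the perf runs reduces to a handful of rpc.py calls, all visible verbatim in the trace above (this is a recap sketch, not an additional step in the run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o                                  # TCP Transport Init
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0         # NSID 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1         # NSID 2
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420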
00:20:31.036 00:04:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:20:32.414 Initializing NVMe Controllers
00:20:32.414 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:20:32.414 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:32.414 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:32.414 Initialization complete. Launching workers.
00:20:32.414 ========================================================
00:20:32.414 Latency(us)
00:20:32.414 Device Information : IOPS MiB/s Average min max
00:20:32.414 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2754.31 10.76 362.06 131.53 6285.14
00:20:32.414 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 138.71 0.54 7272.64 1911.91 11923.34
00:20:32.414 ========================================================
00:20:32.414 Total : 2893.02 11.30 693.41 131.53 11923.34
00:20:32.414
00:20:32.414
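A note on reading these tables: the IOPS and MiB/s columns of the Total row are plain sums (2754.31 + 138.71 = 2893.02), while the average latency is the IOPS-weighted mean of the per-namespace averages, not a simple average. A quick check with the numbers copied from the table above:

  # sum(IOPS_i * avg_i) / sum(IOPS_i) reproduces the Total average latency
  awk 'BEGIN { printf "%.2f\n", (2754.31*362.06 + 138.71*7272.64) / (2754.31 + 138.71) }'
  # prints ~693.41, matching the Total row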
00:20:32.414 00:04:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:20:33.796 Initializing NVMe Controllers
00:20:33.796 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:20:33.796 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:33.796 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:33.796 Initialization complete. Launching workers.
00:20:33.796 ========================================================
00:20:33.796 Latency(us)
00:20:33.796 Device Information : IOPS MiB/s Average min max
00:20:33.796 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6791.40 26.53 4711.97 685.80 9878.47
00:20:33.796 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3881.52 15.16 8283.10 4905.26 15604.35
00:20:33.796 ========================================================
00:20:33.796 Total : 10672.92 41.69 6010.72 685.80 15604.35
00:20:33.796
00:20:33.796 00:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]]
00:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:20:36.429 Initializing NVMe Controllers
00:20:36.429 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:20:36.429 Controller IO queue size 128, less than required.
00:20:36.429 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:36.429 Controller IO queue size 128, less than required.
00:20:36.429 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:36.429 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:36.429 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:36.429 Initialization complete. Launching workers.
00:20:36.429 ========================================================
00:20:36.429 Latency(us)
00:20:36.429 Device Information : IOPS MiB/s Average min max
00:20:36.429 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1448.39 362.10 90321.57 42835.63 206869.23
00:20:36.430 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 608.19 152.05 230927.80 79900.48 518746.93
00:20:36.430 ========================================================
00:20:36.430 Total : 2056.58 514.14 131903.08 42835.63 518746.93
00:20:36.430
00:20:36.688 00:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4
00:20:36.947 Initializing NVMe Controllers
00:20:36.947 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:20:36.947 Controller IO queue size 128, less than required.
00:20:36.947 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:36.947 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:20:36.947 Controller IO queue size 128, less than required.
00:20:36.947 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:36.947 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test
00:20:36.947 WARNING: Some requested NVMe devices were skipped
00:20:36.947 No valid NVMe controllers or AIO or URING devices found
00:20:37.206 00:04:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat
00:20:40.497 Initializing NVMe Controllers
00:20:40.497 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:20:40.497 Controller IO queue size 128, less than required.
00:20:40.497 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:40.497 Controller IO queue size 128, less than required.
00:20:40.497 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:40.497 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:40.497 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:40.497 Initialization complete. Launching workers.
00:20:40.497
00:20:40.497 ====================
00:20:40.497 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:20:40.497 TCP transport:
00:20:40.497 polls: 6863
00:20:40.497 idle_polls: 3392
00:20:40.497 sock_completions: 3471
00:20:40.497 nvme_completions: 5807
00:20:40.497 submitted_requests: 8670
00:20:40.497 queued_requests: 1
00:20:40.497
00:20:40.497 ====================
00:20:40.497 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:20:40.497 TCP transport:
00:20:40.497 polls: 9645
00:20:40.497 idle_polls: 5761
00:20:40.497 sock_completions: 3884
00:20:40.497 nvme_completions: 5941
00:20:40.497 submitted_requests: 8866
00:20:40.497 queued_requests: 1
00:20:40.497 ========================================================
00:20:40.497 Latency(us)
00:20:40.497 Device Information : IOPS MiB/s Average min max
00:20:40.497 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1451.37 362.84 90423.69 47803.22 255173.28
00:20:40.497 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1484.87 371.22 89116.96 47599.46 348052.34
00:20:40.497 ========================================================
00:20:40.497 Total : 2936.24 734.06 89762.87 47599.46 348052.34
00:20:40.497
00:20:40.497 00:04:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:04:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:04:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:04:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']'
00:04:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=23c47f0f-32e4-447c-b4d5-b02c824623a0
00:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 23c47f0f-32e4-447c-b4d5-b02c824623a0
00:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=23c47f0f-32e4-447c-b4d5-b02c824623a0
00:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info
00:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc
00:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs
00:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:20:40.756 00:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[
00:20:40.756 {
00:20:40.756 "uuid": "23c47f0f-32e4-447c-b4d5-b02c824623a0",
00:20:40.756 "name": "lvs_0",
00:20:40.756 "base_bdev": "Nvme0n1",
00:20:40.756 "total_data_clusters": 1278,
00:20:40.756 "free_clusters": 1278,
00:20:40.756 "block_size": 4096,
00:20:40.756 "cluster_size": 4194304
00:20:40.756 }
00:20:40.756 ]'
00:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="23c47f0f-32e4-447c-b4d5-b02c824623a0") .free_clusters'
00:20:41.015 00:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1278
00:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="23c47f0f-32e4-447c-b4d5-b02c824623a0") .cluster_size'
select(.uuid=="23c47f0f-32e4-447c-b4d5-b02c824623a0") .cluster_size' 00:20:41.015 5112 00:20:41.015 00:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:20:41.015 00:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5112 00:20:41.015 00:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5112 00:20:41.015 00:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:41.015 00:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 23c47f0f-32e4-447c-b4d5-b02c824623a0 lbd_0 5112 00:20:41.274 00:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=72254302-e9a0-4ac9-9cc7-94724056bb6b 00:20:41.274 00:04:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 72254302-e9a0-4ac9-9cc7-94724056bb6b lvs_n_0 00:20:41.841 00:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=5c5edd47-f499-46c3-8d1c-515fcacd62b9 00:20:41.841 00:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 5c5edd47-f499-46c3-8d1c-515fcacd62b9 00:20:41.841 00:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=5c5edd47-f499-46c3-8d1c-515fcacd62b9 00:20:41.841 00:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:20:41.841 00:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:20:41.841 00:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:20:41.841 00:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:41.841 00:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:20:41.841 { 00:20:41.841 "uuid": "23c47f0f-32e4-447c-b4d5-b02c824623a0", 00:20:41.841 "name": "lvs_0", 00:20:41.841 "base_bdev": "Nvme0n1", 00:20:41.841 "total_data_clusters": 1278, 00:20:41.841 "free_clusters": 0, 00:20:41.841 "block_size": 4096, 00:20:41.841 "cluster_size": 4194304 00:20:41.841 }, 00:20:41.841 { 00:20:41.841 "uuid": "5c5edd47-f499-46c3-8d1c-515fcacd62b9", 00:20:41.841 "name": "lvs_n_0", 00:20:41.841 "base_bdev": "72254302-e9a0-4ac9-9cc7-94724056bb6b", 00:20:41.841 "total_data_clusters": 1276, 00:20:41.841 "free_clusters": 1276, 00:20:41.841 "block_size": 4096, 00:20:41.841 "cluster_size": 4194304 00:20:41.841 } 00:20:41.841 ]' 00:20:41.841 00:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="5c5edd47-f499-46c3-8d1c-515fcacd62b9") .free_clusters' 00:20:42.100 00:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1276 00:20:42.100 00:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="5c5edd47-f499-46c3-8d1c-515fcacd62b9") .cluster_size' 00:20:42.100 00:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:20:42.100 00:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5104 00:20:42.100 5104 00:20:42.100 00:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5104 00:20:42.100 00:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:42.100 00:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5c5edd47-f499-46c3-8d1c-515fcacd62b9 lbd_nest_0 5104 00:20:42.359 00:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=be20caa9-e673-4471-8263-83d1c6185bfd 00:20:42.359 00:04:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:42.618 00:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:42.619 00:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 be20caa9-e673-4471-8263-83d1c6185bfd 00:20:42.879 00:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:43.137 00:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:43.137 00:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:43.137 00:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:43.137 00:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:43.137 00:04:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:43.397 Initializing NVMe Controllers 00:20:43.397 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:43.397 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:43.397 WARNING: Some requested NVMe devices were skipped 00:20:43.397 No valid NVMe controllers or AIO or URING devices found 00:20:43.656 00:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:43.657 00:04:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:55.864 Initializing NVMe Controllers 00:20:55.864 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.864 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:55.864 Initialization complete. Launching workers. 
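Note on the skipped 512-byte passes in this sweep: the namespace exported here is the 5104 MiB lbd_nest_0 volume (5104 * 1048576 = 5351931904 bytes, matching the warning text) formatted with a 4096-byte block size, and an I/O size smaller than the block size cannot be issued, so spdk_nvme_perf skips the controller instead of running. A hypothetical restatement of that compatibility check in shell, for orientation only (the real logic lives inside spdk_nvme_perf):

  # variable names are illustrative, not from the test scripts
  ns_size=5351931904 block_size=4096 io_size=512
  if (( io_size < block_size || ns_size < io_size )); then
      echo "skip: io_size=$io_size incompatible with ns (block_size=$block_size)"
  fi

Only the 131072-byte (128 KiB) passes therefore produce the latency tables that follow.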
00:20:55.864 ======================================================== 00:20:55.864 Latency(us) 00:20:55.864 Device Information : IOPS MiB/s Average min max 00:20:55.864 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 829.20 103.65 1204.34 403.27 9557.51 00:20:55.864 ======================================================== 00:20:55.864 Total : 829.20 103.65 1204.34 403.27 9557.51 00:20:55.864 00:20:55.864 00:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:55.864 00:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:55.864 00:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:55.864 Initializing NVMe Controllers 00:20:55.864 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.864 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:55.864 WARNING: Some requested NVMe devices were skipped 00:20:55.864 No valid NVMe controllers or AIO or URING devices found 00:20:55.864 00:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:55.864 00:05:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:05.913 Initializing NVMe Controllers 00:21:05.913 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:05.913 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:05.913 Initialization complete. Launching workers. 
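A quick sanity check on the QD=1 table just above: the MiB/s column is simply IOPS times the 128 KiB I/O size, reproducible with:

  # 829.20 IOPS at 131072 bytes per I/O, expressed in MiB/s
  awk 'BEGIN { printf "%.2f\n", 829.20 * 131072 / 1048576 }'   # prints 103.65

The same relation holds for the deeper-queue tables that follow.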
00:21:05.913 ======================================================== 00:21:05.913 Latency(us) 00:21:05.913 Device Information : IOPS MiB/s Average min max 00:21:05.913 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1358.86 169.86 23580.90 6369.25 63511.84 00:21:05.913 ======================================================== 00:21:05.913 Total : 1358.86 169.86 23580.90 6369.25 63511.84 00:21:05.913 00:21:05.913 00:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:05.913 00:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:05.913 00:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:05.913 Initializing NVMe Controllers 00:21:05.913 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:05.913 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:05.913 WARNING: Some requested NVMe devices were skipped 00:21:05.913 No valid NVMe controllers or AIO or URING devices found 00:21:05.913 00:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:05.913 00:05:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:15.895 Initializing NVMe Controllers 00:21:15.895 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:15.895 Controller IO queue size 128, less than required. 00:21:15.895 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:15.895 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:15.895 Initialization complete. Launching workers. 
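The QD=32 table just above also agrees with Little's law, average latency ≈ queue depth / IOPS:

  awk 'BEGIN { printf "%.0f us\n", 32 / 1358.86 * 1e6 }'   # prints 23549, close to the reported 23580.90 us average

And the "Controller IO queue size 128, less than required" notice in the QD=128 run above means what it says: with -q 128 the host keeps more I/O outstanding than the controller's queue can hold at once, so the excess waits inside the NVMe driver rather than failing.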
00:21:15.895 ======================================================== 00:21:15.895 Latency(us) 00:21:15.895 Device Information : IOPS MiB/s Average min max 00:21:15.895 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3673.00 459.12 34892.83 14303.96 83194.58 00:21:15.895 ======================================================== 00:21:15.895 Total : 3673.00 459.12 34892.83 14303.96 83194.58 00:21:15.895 00:21:15.895 00:05:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:16.153 00:05:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete be20caa9-e673-4471-8263-83d1c6185bfd 00:21:16.412 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:16.671 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 72254302-e9a0-4ac9-9cc7-94724056bb6b 00:21:16.930 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:17.189 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:17.189 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:17.189 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:17.189 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:21:17.189 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:17.189 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:17.189 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:17.189 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:17.189 rmmod nvme_tcp 00:21:17.190 rmmod nvme_fabrics 00:21:17.190 rmmod nvme_keyring 00:21:17.190 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:17.190 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:17.190 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:17.190 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 79684 ']' 00:21:17.190 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 79684 00:21:17.190 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 79684 ']' 00:21:17.190 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 79684 00:21:17.190 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:21:17.190 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.190 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79684 00:21:17.190 killing process with pid 79684 00:21:17.190 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:17.190 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:17.190 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79684' 00:21:17.190 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 79684 00:21:17.190 00:05:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 79684 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:19.725 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:19.726 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:19.726 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:19.726 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:19.726 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.726 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.726 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.726 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:21:19.726 00:21:19.726 real 0m54.513s 00:21:19.726 user 3m25.275s 00:21:19.726 sys 0m11.910s 00:21:19.726 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:19.726 00:05:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:19.726 ************************************ 00:21:19.726 END TEST nvmf_perf 00:21:19.726 ************************************ 00:21:19.726 00:05:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:19.726 00:05:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:19.726 00:05:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:19.726 00:05:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.726 ************************************ 00:21:19.726 START TEST nvmf_fio_host 00:21:19.726 ************************************ 00:21:19.726 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:19.986 * Looking for test storage... 00:21:19.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:19.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.986 --rc genhtml_branch_coverage=1 00:21:19.986 --rc genhtml_function_coverage=1 00:21:19.986 --rc genhtml_legend=1 00:21:19.986 --rc geninfo_all_blocks=1 00:21:19.986 --rc geninfo_unexecuted_blocks=1 00:21:19.986 00:21:19.986 ' 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:19.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.986 --rc genhtml_branch_coverage=1 00:21:19.986 --rc genhtml_function_coverage=1 00:21:19.986 --rc genhtml_legend=1 00:21:19.986 --rc geninfo_all_blocks=1 00:21:19.986 --rc geninfo_unexecuted_blocks=1 00:21:19.986 00:21:19.986 ' 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:19.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.986 --rc genhtml_branch_coverage=1 00:21:19.986 --rc genhtml_function_coverage=1 00:21:19.986 --rc genhtml_legend=1 00:21:19.986 --rc geninfo_all_blocks=1 00:21:19.986 --rc geninfo_unexecuted_blocks=1 00:21:19.986 00:21:19.986 ' 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:19.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.986 --rc genhtml_branch_coverage=1 00:21:19.986 --rc genhtml_function_coverage=1 00:21:19.986 --rc genhtml_legend=1 00:21:19.986 --rc geninfo_all_blocks=1 00:21:19.986 --rc geninfo_unexecuted_blocks=1 00:21:19.986 00:21:19.986 ' 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.986 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.987 00:05:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.987 00:05:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:19.987 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
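nvmftestinit now tears down whatever topology the previous test left behind (the "Cannot find device ..." messages that follow are the expected outcome of those best-effort deletions) and rebuilds it from scratch. A condensed sketch of the first initiator/target pair that nvmf_veth_init assembles; the script additionally creates nvmf_init_if2/nvmf_tgt_if2 at 10.0.0.2/10.0.0.4, brings every link up, and opens TCP port 4420 in iptables, all of which appears verbatim in the trace below:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator half, root netns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target half
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                      # bridge the two halves together
  ip link set nvmf_tgt_br master nvmf_br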
00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:19.987 Cannot find device "nvmf_init_br" 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:19.987 Cannot find device "nvmf_init_br2" 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:19.987 Cannot find device "nvmf_tgt_br" 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:21:19.987 Cannot find device "nvmf_tgt_br2" 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:19.987 Cannot find device "nvmf_init_br" 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:19.987 Cannot find device "nvmf_init_br2" 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:19.987 Cannot find device "nvmf_tgt_br" 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:19.987 Cannot find device "nvmf_tgt_br2" 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:21:19.987 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:20.246 Cannot find device "nvmf_br" 00:21:20.246 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:21:20.246 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:20.246 Cannot find device "nvmf_init_if" 00:21:20.246 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:21:20.246 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:20.246 Cannot find device "nvmf_init_if2" 00:21:20.246 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:21:20.246 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:20.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:20.246 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:21:20.246 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:20.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:20.246 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:21:20.246 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:20.246 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:20.246 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:20.247 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:20.506 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:20.506 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:21:20.506 00:21:20.506 --- 10.0.0.3 ping statistics --- 00:21:20.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.506 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:20.506 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:20.506 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:21:20.506 00:21:20.506 --- 10.0.0.4 ping statistics --- 00:21:20.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.506 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:20.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:20.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:21:20.506 00:21:20.506 --- 10.0.0.1 ping statistics --- 00:21:20.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.506 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:20.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:20.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:21:20.506 00:21:20.506 --- 10.0.0.2 ping statistics --- 00:21:20.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.506 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:20.506 00:05:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:20.506 00:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:20.506 00:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:20.506 00:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:20.506 00:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.506 00:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=80586 00:21:20.506 00:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:20.506 00:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:20.506 00:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 80586 00:21:20.506 00:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 80586 ']' 00:21:20.506 00:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.506 00:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:20.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.506 00:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.506 00:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:20.506 00:05:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.506 [2024-11-19 00:05:27.133894] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:21:20.507 [2024-11-19 00:05:27.134061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.766 [2024-11-19 00:05:27.326892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:21.024 [2024-11-19 00:05:27.458565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.024 [2024-11-19 00:05:27.458655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.024 [2024-11-19 00:05:27.458682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.024 [2024-11-19 00:05:27.458698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.024 [2024-11-19 00:05:27.458715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
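The target was launched just above with "-i 0 -e 0xFFFF -m 0xF": mask 0xF is binary 1111, i.e. cores 0 through 3, which is why exactly four reactors report in next, and 0xFFFF enables every tracepoint group, which is what the trace-snapshot notices refer to. If a capture is needed, the command the log itself suggests applies:

  spdk_trace -s nvmf -i 0   # snapshot tracepoints from shared-memory instance 0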
00:21:21.024 [2024-11-19 00:05:27.460937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.024 [2024-11-19 00:05:27.461084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.024 [2024-11-19 00:05:27.461225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:21.024 [2024-11-19 00:05:27.461345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.024 [2024-11-19 00:05:27.649696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:21.591 00:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:21.591 00:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:21:21.591 00:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:21.850 [2024-11-19 00:05:28.389820] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.850 00:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:21.850 00:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:21.850 00:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.850 00:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:22.107 Malloc1 00:21:22.107 00:05:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:22.365 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:22.931 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:22.931 [2024-11-19 00:05:29.544221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:22.931 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:21:23.190 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:23.190 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:23.190 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:23.190 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:23.190 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:23.190 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:23.190 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:23.190 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:23.190 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:23.190 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:23.190 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:23.190 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:23.190 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:23.190 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:23.190 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:23.190 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:21:23.190 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:23.190 00:05:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:23.465 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:23.465 fio-3.35 00:21:23.465 Starting 1 thread 00:21:26.009 00:21:26.009 test: (groupid=0, jobs=1): err= 0: pid=80656: Tue Nov 19 00:05:32 2024 00:21:26.009 read: IOPS=7463, BW=29.2MiB/s (30.6MB/s)(58.5MiB/2008msec) 00:21:26.009 slat (usec): min=2, max=4248, avg= 3.28, stdev=34.75 00:21:26.009 clat (usec): min=3731, max=15832, avg=8894.01, stdev=666.93 00:21:26.009 lat (usec): min=3754, max=15835, avg=8897.29, stdev=665.61 00:21:26.009 clat percentiles (usec): 00:21:26.009 | 1.00th=[ 7570], 5.00th=[ 7963], 10.00th=[ 8160], 20.00th=[ 8356], 00:21:26.009 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:21:26.009 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9634], 95.00th=[10028], 00:21:26.009 | 99.00th=[10683], 99.50th=[11076], 99.90th=[13566], 99.95th=[15008], 00:21:26.009 | 99.99th=[15795] 00:21:26.009 bw ( KiB/s): min=28272, max=31144, per=100.00%, avg=29866.00, stdev=1202.67, samples=4 00:21:26.009 iops : min= 7068, max= 7786, avg=7466.50, stdev=300.67, samples=4 00:21:26.009 write: IOPS=7459, BW=29.1MiB/s (30.6MB/s)(58.5MiB/2008msec); 0 zone resets 00:21:26.009 slat (usec): min=2, max=2142, avg= 3.33, stdev=17.63 00:21:26.009 clat (usec): min=3374, max=15805, avg=8132.97, stdev=649.68 00:21:26.009 lat (usec): min=3392, max=15808, avg=8136.30, stdev=649.74 00:21:26.009 clat percentiles (usec): 00:21:26.009 | 1.00th=[ 6915], 5.00th=[ 7242], 10.00th=[ 7439], 20.00th=[ 7635], 00:21:26.009 | 30.00th=[ 7832], 40.00th=[ 7963], 50.00th=[ 8094], 60.00th=[ 8225], 00:21:26.009 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 8848], 95.00th=[ 9110], 00:21:26.009 | 99.00th=[ 9896], 99.50th=[10159], 99.90th=[14353], 99.95th=[15139], 00:21:26.009 | 99.99th=[15795] 00:21:26.009 bw ( KiB/s): min=29056, max=30464, per=99.91%, avg=29812.00, stdev=757.64, samples=4 00:21:26.009 iops : min= 7264, max= 7616, avg=7453.00, stdev=189.41, samples=4 
00:21:26.009 lat (msec) : 4=0.02%, 10=97.28%, 20=2.70% 00:21:26.009 cpu : usr=70.40%, sys=21.62%, ctx=22, majf=0, minf=1554 00:21:26.009 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:26.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:26.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:26.010 issued rwts: total=14987,14979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:26.010 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:26.010 00:21:26.010 Run status group 0 (all jobs): 00:21:26.010 READ: bw=29.2MiB/s (30.6MB/s), 29.2MiB/s-29.2MiB/s (30.6MB/s-30.6MB/s), io=58.5MiB (61.4MB), run=2008-2008msec 00:21:26.010 WRITE: bw=29.1MiB/s (30.6MB/s), 29.1MiB/s-29.1MiB/s (30.6MB/s-30.6MB/s), io=58.5MiB (61.4MB), run=2008-2008msec 00:21:26.010 ----------------------------------------------------- 00:21:26.010 Suppressions used: 00:21:26.010 count bytes template 00:21:26.010 1 57 /usr/src/fio/parse.c 00:21:26.010 1 8 libtcmalloc_minimal.so 00:21:26.010 ----------------------------------------------------- 00:21:26.010 00:21:26.010 00:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:21:26.010 00:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:21:26.010 00:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:26.010 00:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:26.010 00:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:26.010 00:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:26.010 00:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:26.010 00:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:26.010 00:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:26.010 00:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:26.010 00:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:26.010 00:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:26.010 00:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:26.010 00:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:26.010 00:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:21:26.010 00:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:26.010 00:05:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:21:26.268 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:26.268 fio-3.35 00:21:26.268 Starting 1 thread 00:21:28.801 00:21:28.801 test: (groupid=0, jobs=1): err= 0: pid=80703: Tue Nov 19 00:05:35 2024 00:21:28.801 read: IOPS=7206, BW=113MiB/s (118MB/s)(226MiB/2006msec) 00:21:28.801 slat (usec): min=3, max=116, avg= 4.41, stdev= 2.56 00:21:28.801 clat (usec): min=2453, max=19611, avg=10045.53, stdev=2853.29 00:21:28.801 lat (usec): min=2457, max=19615, avg=10049.94, stdev=2853.34 00:21:28.801 clat percentiles (usec): 00:21:28.801 | 1.00th=[ 4817], 5.00th=[ 5735], 10.00th=[ 6456], 20.00th=[ 7439], 00:21:28.801 | 30.00th=[ 8356], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[10683], 00:21:28.801 | 70.00th=[11338], 80.00th=[12256], 90.00th=[13829], 95.00th=[15401], 00:21:28.801 | 99.00th=[17433], 99.50th=[17957], 99.90th=[19006], 99.95th=[19268], 00:21:28.801 | 99.99th=[19530] 00:21:28.801 bw ( KiB/s): min=50528, max=65504, per=49.22%, avg=56752.00, stdev=7407.45, samples=4 00:21:28.801 iops : min= 3158, max= 4094, avg=3547.00, stdev=462.97, samples=4 00:21:28.801 write: IOPS=4058, BW=63.4MiB/s (66.5MB/s)(116MiB/1832msec); 0 zone resets 00:21:28.801 slat (usec): min=32, max=223, avg=39.34, stdev= 9.46 00:21:28.801 clat (usec): min=3592, max=23556, avg=14143.57, stdev=2834.94 00:21:28.801 lat (usec): min=3652, max=23590, avg=14182.91, stdev=2836.55 00:21:28.801 clat percentiles (usec): 00:21:28.801 | 1.00th=[ 8848], 5.00th=[10290], 10.00th=[10814], 20.00th=[11731], 00:21:28.801 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13698], 60.00th=[14615], 00:21:28.801 | 70.00th=[15401], 80.00th=[16712], 90.00th=[18220], 95.00th=[19268], 00:21:28.801 | 99.00th=[21103], 99.50th=[21890], 99.90th=[22938], 99.95th=[23200], 00:21:28.801 | 99.99th=[23462] 00:21:28.801 bw ( KiB/s): min=52416, max=68256, per=90.73%, avg=58920.00, stdev=7594.79, samples=4 00:21:28.801 iops : min= 3276, max= 4266, avg=3682.50, stdev=474.67, samples=4 00:21:28.801 lat (msec) : 4=0.17%, 10=34.85%, 20=63.97%, 50=1.00% 00:21:28.801 cpu : usr=81.05%, sys=14.61%, ctx=6, majf=0, minf=2196 00:21:28.801 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:28.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:28.801 issued rwts: total=14457,7436,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.801 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:28.801 00:21:28.801 Run status group 0 (all jobs): 00:21:28.801 READ: bw=113MiB/s (118MB/s), 113MiB/s-113MiB/s (118MB/s-118MB/s), io=226MiB (237MB), run=2006-2006msec 00:21:28.801 WRITE: bw=63.4MiB/s (66.5MB/s), 63.4MiB/s-63.4MiB/s (66.5MB/s-66.5MB/s), io=116MiB (122MB), run=1832-1832msec 00:21:28.801 ----------------------------------------------------- 00:21:28.801 Suppressions used: 00:21:28.801 count bytes template 00:21:28.801 1 57 /usr/src/fio/parse.c 00:21:28.801 129 12384 /usr/src/fio/iolog.c 00:21:28.801 1 8 libtcmalloc_minimal.so 00:21:28.801 ----------------------------------------------------- 00:21:28.801 00:21:28.801 00:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:29.060 00:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:29.060 00:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:29.060 00:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:29.060 00:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:21:29.060 00:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:21:29.060 00:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:29.060 00:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:29.060 00:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:21:29.319 00:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:21:29.319 00:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:29.319 00:05:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:21:29.578 Nvme0n1 00:21:29.578 00:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:29.836 00:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=266689ed-ff1c-4ef6-aa39-3e4939527852 00:21:29.836 00:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 266689ed-ff1c-4ef6-aa39-3e4939527852 00:21:29.836 00:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=266689ed-ff1c-4ef6-aa39-3e4939527852 00:21:29.836 00:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:21:29.836 00:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:21:29.836 00:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:21:29.836 00:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:30.095 00:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:21:30.095 { 00:21:30.095 "uuid": "266689ed-ff1c-4ef6-aa39-3e4939527852", 00:21:30.095 "name": "lvs_0", 00:21:30.095 "base_bdev": "Nvme0n1", 00:21:30.095 "total_data_clusters": 4, 00:21:30.095 "free_clusters": 4, 00:21:30.095 "block_size": 4096, 00:21:30.095 "cluster_size": 1073741824 00:21:30.095 } 00:21:30.095 ]' 00:21:30.095 00:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="266689ed-ff1c-4ef6-aa39-3e4939527852") .free_clusters' 00:21:30.095 00:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=4 00:21:30.095 00:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="266689ed-ff1c-4ef6-aa39-3e4939527852") .cluster_size' 00:21:30.095 00:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:21:30.095 00:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4096 00:21:30.095 4096 00:21:30.095 00:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1378 -- # echo 4096 00:21:30.095 00:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:30.354 5cbe69b6-2e83-4ea9-a872-c1b1503eae1b 00:21:30.354 00:05:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:30.613 00:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:30.872 00:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:21:31.130 00:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:31.130 00:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:31.130 00:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:31.130 00:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:31.130 00:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:31.130 00:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:31.130 00:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:31.130 00:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:31.130 00:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:31.130 00:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:31.130 00:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:31.130 00:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:31.130 00:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:31.130 00:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:31.130 00:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:21:31.131 00:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:31.131 00:05:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:31.389 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:31.389 fio-3.35 00:21:31.389 Starting 1 thread 
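Every fio_plugin call in this log repeats the sanitizer-detection step traced above: ldd the SPDK plugin, grep out libasan, and place the resolved runtime ahead of the plugin in LD_PRELOAD so the sanitizer initializes before fio dlopens the ioengine. Boiled down to a standalone sketch, with paths exactly as they appear in this run:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
# Resolve the ASan runtime the plugin links against; empty when the build is not instrumented.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# The sanitizer runtime must be loaded before the plugin, hence the ordered LD_PRELOAD pair.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096
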
00:21:33.921 00:21:33.921 test: (groupid=0, jobs=1): err= 0: pid=80807: Tue Nov 19 00:05:40 2024 00:21:33.921 read: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(39.8MiB/2010msec) 00:21:33.921 slat (usec): min=2, max=194, avg= 3.44, stdev= 3.60 00:21:33.921 clat (usec): min=3385, max=23721, avg=13173.88, stdev=1159.04 00:21:33.921 lat (usec): min=3391, max=23724, avg=13177.31, stdev=1158.82 00:21:33.921 clat percentiles (usec): 00:21:33.921 | 1.00th=[10814], 5.00th=[11600], 10.00th=[11863], 20.00th=[12256], 00:21:33.921 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:21:33.921 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14484], 95.00th=[14877], 00:21:33.921 | 99.00th=[15926], 99.50th=[16712], 99.90th=[21890], 99.95th=[23462], 00:21:33.921 | 99.99th=[23725] 00:21:33.921 bw ( KiB/s): min=19216, max=20584, per=99.83%, avg=20222.00, stdev=670.93, samples=4 00:21:33.921 iops : min= 4804, max= 5146, avg=5055.50, stdev=167.73, samples=4 00:21:33.921 write: IOPS=5056, BW=19.8MiB/s (20.7MB/s)(39.7MiB/2010msec); 0 zone resets 00:21:33.921 slat (usec): min=2, max=156, avg= 3.60, stdev= 2.85 00:21:33.921 clat (usec): min=2211, max=21890, avg=11958.16, stdev=1063.50 00:21:33.921 lat (usec): min=2221, max=21893, avg=11961.76, stdev=1063.39 00:21:33.921 clat percentiles (usec): 00:21:33.921 | 1.00th=[ 9765], 5.00th=[10421], 10.00th=[10814], 20.00th=[11207], 00:21:33.921 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12125], 00:21:33.921 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13173], 95.00th=[13566], 00:21:33.921 | 99.00th=[14484], 99.50th=[15008], 99.90th=[18744], 99.95th=[20055], 00:21:33.921 | 99.99th=[21627] 00:21:33.921 bw ( KiB/s): min=20096, max=20416, per=99.92%, avg=20210.00, stdev=149.29, samples=4 00:21:33.921 iops : min= 5024, max= 5104, avg=5052.50, stdev=37.32, samples=4 00:21:33.921 lat (msec) : 4=0.05%, 10=0.88%, 20=98.95%, 50=0.11% 00:21:33.921 cpu : usr=76.16%, sys=18.62%, ctx=3, majf=0, minf=1554 00:21:33.921 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:33.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:33.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:33.921 issued rwts: total=10179,10164,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:33.921 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:33.921 00:21:33.921 Run status group 0 (all jobs): 00:21:33.921 READ: bw=19.8MiB/s (20.7MB/s), 19.8MiB/s-19.8MiB/s (20.7MB/s-20.7MB/s), io=39.8MiB (41.7MB), run=2010-2010msec 00:21:33.922 WRITE: bw=19.8MiB/s (20.7MB/s), 19.8MiB/s-19.8MiB/s (20.7MB/s-20.7MB/s), io=39.7MiB (41.6MB), run=2010-2010msec 00:21:33.922 ----------------------------------------------------- 00:21:33.922 Suppressions used: 00:21:33.922 count bytes template 00:21:33.922 1 58 /usr/src/fio/parse.c 00:21:33.922 1 8 libtcmalloc_minimal.so 00:21:33.922 ----------------------------------------------------- 00:21:33.922 00:21:33.922 00:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:34.180 00:05:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:34.439 00:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=c617f38f-f898-4437-bc0b-28a8e70a5ff4 00:21:34.439 00:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # 
get_lvs_free_mb c617f38f-f898-4437-bc0b-28a8e70a5ff4 00:21:34.439 00:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=c617f38f-f898-4437-bc0b-28a8e70a5ff4 00:21:34.439 00:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:21:34.439 00:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:21:34.439 00:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:21:34.439 00:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:34.697 00:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:21:34.697 { 00:21:34.697 "uuid": "266689ed-ff1c-4ef6-aa39-3e4939527852", 00:21:34.697 "name": "lvs_0", 00:21:34.697 "base_bdev": "Nvme0n1", 00:21:34.697 "total_data_clusters": 4, 00:21:34.697 "free_clusters": 0, 00:21:34.697 "block_size": 4096, 00:21:34.697 "cluster_size": 1073741824 00:21:34.697 }, 00:21:34.697 { 00:21:34.697 "uuid": "c617f38f-f898-4437-bc0b-28a8e70a5ff4", 00:21:34.697 "name": "lvs_n_0", 00:21:34.697 "base_bdev": "5cbe69b6-2e83-4ea9-a872-c1b1503eae1b", 00:21:34.697 "total_data_clusters": 1022, 00:21:34.697 "free_clusters": 1022, 00:21:34.697 "block_size": 4096, 00:21:34.697 "cluster_size": 4194304 00:21:34.697 } 00:21:34.697 ]' 00:21:34.697 00:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="c617f38f-f898-4437-bc0b-28a8e70a5ff4") .free_clusters' 00:21:34.697 00:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1022 00:21:34.955 00:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="c617f38f-f898-4437-bc0b-28a8e70a5ff4") .cluster_size' 00:21:34.955 00:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:21:34.955 00:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4088 00:21:34.955 4088 00:21:34.955 00:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4088 00:21:34.955 00:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:35.214 24a90199-219d-4fda-bb41-b606f2c5e80c 00:21:35.214 00:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:35.214 00:05:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:35.473 00:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:21:35.731 00:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:35.731 00:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 
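The get_lvs_free_mb helper traced above converts the lvstore's free cluster count into mebibytes: 1022 free clusters at a 4194304-byte (4 MiB) cluster size gives 4088 MiB, which is then handed to bdev_lvol_create. The same arithmetic as a standalone sketch, using the UUID from this run:

lvs_uuid=c617f38f-f898-4437-bc0b-28a8e70a5ff4
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Pull the free cluster count and cluster size for this lvstore.
fc=$($rpc bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$lvs_uuid\") .free_clusters")
cs=$($rpc bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$lvs_uuid\") .cluster_size")
# free MiB = clusters * bytes-per-cluster / 1024^2; here 1022 * 4194304 / 1048576 = 4088.
echo $((fc * cs / 1024 / 1024))
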
00:21:35.731 00:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:35.731 00:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:35.731 00:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:35.731 00:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:35.731 00:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:35.731 00:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:35.731 00:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:35.731 00:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:35.731 00:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:35.731 00:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:35.731 00:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:35.731 00:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:35.731 00:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:21:35.731 00:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:35.731 00:05:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:35.990 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:35.990 fio-3.35 00:21:35.990 Starting 1 thread 00:21:38.521 00:21:38.521 test: (groupid=0, jobs=1): err= 0: pid=80885: Tue Nov 19 00:05:44 2024 00:21:38.521 read: IOPS=4517, BW=17.6MiB/s (18.5MB/s)(35.5MiB/2012msec) 00:21:38.521 slat (usec): min=2, max=349, avg= 3.92, stdev= 5.36 00:21:38.521 clat (usec): min=4274, max=27107, avg=14771.42, stdev=1290.12 00:21:38.521 lat (usec): min=4285, max=27110, avg=14775.34, stdev=1289.58 00:21:38.521 clat percentiles (usec): 00:21:38.521 | 1.00th=[12125], 5.00th=[12911], 10.00th=[13304], 20.00th=[13829], 00:21:38.521 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:21:38.521 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:21:38.521 | 99.00th=[17433], 99.50th=[18220], 99.90th=[24773], 99.95th=[26346], 00:21:38.521 | 99.99th=[27132] 00:21:38.521 bw ( KiB/s): min=17368, max=18416, per=99.98%, avg=18066.00, stdev=487.08, samples=4 00:21:38.521 iops : min= 4342, max= 4604, avg=4516.50, stdev=121.77, samples=4 00:21:38.521 write: IOPS=4524, BW=17.7MiB/s (18.5MB/s)(35.6MiB/2012msec); 0 zone resets 00:21:38.521 slat (usec): min=2, max=413, avg= 4.21, stdev= 5.26 00:21:38.521 clat (usec): min=3489, max=26137, avg=13429.54, stdev=1248.65 00:21:38.521 lat (usec): min=3507, max=26140, avg=13433.74, stdev=1248.33 00:21:38.521 clat percentiles (usec): 00:21:38.521 | 1.00th=[10945], 5.00th=[11731], 10.00th=[12125], 
20.00th=[12518], 00:21:38.521 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:21:38.521 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14746], 95.00th=[15139], 00:21:38.521 | 99.00th=[16057], 99.50th=[17957], 99.90th=[24773], 99.95th=[25822], 00:21:38.521 | 99.99th=[26084] 00:21:38.521 bw ( KiB/s): min=17792, max=18312, per=99.82%, avg=18066.00, stdev=226.87, samples=4 00:21:38.521 iops : min= 4448, max= 4578, avg=4516.50, stdev=56.72, samples=4 00:21:38.521 lat (msec) : 4=0.01%, 10=0.35%, 20=99.36%, 50=0.28% 00:21:38.522 cpu : usr=73.60%, sys=20.44%, ctx=4, majf=0, minf=1554 00:21:38.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:38.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:38.522 issued rwts: total=9089,9104,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:38.522 00:21:38.522 Run status group 0 (all jobs): 00:21:38.522 READ: bw=17.6MiB/s (18.5MB/s), 17.6MiB/s-17.6MiB/s (18.5MB/s-18.5MB/s), io=35.5MiB (37.2MB), run=2012-2012msec 00:21:38.522 WRITE: bw=17.7MiB/s (18.5MB/s), 17.7MiB/s-17.7MiB/s (18.5MB/s-18.5MB/s), io=35.6MiB (37.3MB), run=2012-2012msec 00:21:38.781 ----------------------------------------------------- 00:21:38.781 Suppressions used: 00:21:38.781 count bytes template 00:21:38.781 1 58 /usr/src/fio/parse.c 00:21:38.781 1 8 libtcmalloc_minimal.so 00:21:38.781 ----------------------------------------------------- 00:21:38.781 00:21:38.781 00:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:39.040 00:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:21:39.040 00:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:39.299 00:05:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:39.557 00:05:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:39.816 00:05:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:40.075 00:05:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:40.642 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:40.642 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:40.642 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:40.642 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:40.642 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:21:40.642 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:40.642 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:21:40.642 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:40.642 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:40.642 rmmod 
nvme_tcp 00:21:40.642 rmmod nvme_fabrics 00:21:40.642 rmmod nvme_keyring 00:21:40.642 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:40.642 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:21:40.642 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:21:40.642 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 80586 ']' 00:21:40.642 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 80586 00:21:40.642 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 80586 ']' 00:21:40.642 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 80586 00:21:40.642 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:21:40.642 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:40.642 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80586 00:21:40.900 killing process with pid 80586 00:21:40.900 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:40.900 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:40.900 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80586' 00:21:40.900 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 80586 00:21:40.901 00:05:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 80586 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set 
nvmf_tgt_br2 down 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:41.837 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:42.097 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:42.097 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:42.097 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:42.097 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.097 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.097 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.097 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:21:42.097 00:21:42.097 real 0m22.246s 00:21:42.097 user 1m35.204s 00:21:42.097 sys 0m4.854s 00:21:42.097 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:42.097 00:05:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.097 ************************************ 00:21:42.097 END TEST nvmf_fio_host 00:21:42.097 ************************************ 00:21:42.097 00:05:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:42.097 00:05:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:42.097 00:05:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:42.097 00:05:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.097 ************************************ 00:21:42.097 START TEST nvmf_failover 00:21:42.097 ************************************ 00:21:42.097 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:42.097 * Looking for test storage... 
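For reference, the interface teardown that closed out nvmf_fio_host above condenses to the loop below; the remove_spdk_ns call that follows it disposes of the nvmf_tgt_ns_spdk namespace itself, which is why the failover test's setup later reports the namespace as missing before recreating it:

for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$link" nomaster    # detach the bridge-side peer from nvmf_br
    ip link set "$link" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
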
00:21:42.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:42.097 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:42.097 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:21:42.097 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:42.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.357 --rc genhtml_branch_coverage=1 00:21:42.357 --rc genhtml_function_coverage=1 00:21:42.357 --rc genhtml_legend=1 00:21:42.357 --rc geninfo_all_blocks=1 00:21:42.357 --rc geninfo_unexecuted_blocks=1 00:21:42.357 00:21:42.357 ' 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:42.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.357 --rc genhtml_branch_coverage=1 00:21:42.357 --rc genhtml_function_coverage=1 00:21:42.357 --rc genhtml_legend=1 00:21:42.357 --rc geninfo_all_blocks=1 00:21:42.357 --rc geninfo_unexecuted_blocks=1 00:21:42.357 00:21:42.357 ' 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:42.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.357 --rc genhtml_branch_coverage=1 00:21:42.357 --rc genhtml_function_coverage=1 00:21:42.357 --rc genhtml_legend=1 00:21:42.357 --rc geninfo_all_blocks=1 00:21:42.357 --rc geninfo_unexecuted_blocks=1 00:21:42.357 00:21:42.357 ' 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:42.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.357 --rc genhtml_branch_coverage=1 00:21:42.357 --rc genhtml_function_coverage=1 00:21:42.357 --rc genhtml_legend=1 00:21:42.357 --rc geninfo_all_blocks=1 00:21:42.357 --rc geninfo_unexecuted_blocks=1 00:21:42.357 00:21:42.357 ' 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.357 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.358 
00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:42.358 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
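The "[: : integer expression expected" complaint above is harmless: line 33 of nvmf/common.sh applies a numeric test to a variable that is unset in this environment, so the [ builtin receives an empty string and returns an error status that the script simply treats as false. A two-line reproduction, with a hypothetical variable name standing in for the option checked at common.sh:33:

flag=""                              # unset in this CI environment
[ "$flag" -eq 1 ] && echo enabled    # emits the same "[: : integer expression expected"
[ "${flag:-0}" -eq 1 ] || echo off   # defensive form: default to 0; no error, prints "off"
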
00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:42.358 Cannot find device "nvmf_init_br" 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:42.358 Cannot find device "nvmf_init_br2" 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:21:42.358 Cannot find device "nvmf_tgt_br" 00:21:42.358 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:21:42.359 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:42.359 Cannot find device "nvmf_tgt_br2" 00:21:42.359 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:21:42.359 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:42.359 Cannot find device "nvmf_init_br" 00:21:42.359 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:21:42.359 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:42.359 Cannot find device "nvmf_init_br2" 00:21:42.359 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:21:42.359 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:42.359 Cannot find device "nvmf_tgt_br" 00:21:42.359 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:21:42.359 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:42.359 Cannot find device "nvmf_tgt_br2" 00:21:42.359 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:21:42.359 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:42.359 Cannot find device "nvmf_br" 00:21:42.359 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:21:42.359 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:42.359 Cannot find device "nvmf_init_if" 00:21:42.359 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:21:42.359 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:42.359 Cannot find device "nvmf_init_if2" 00:21:42.359 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:21:42.359 00:05:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:42.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:42.359 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:21:42.359 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:42.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:42.359 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:21:42.359 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:42.359 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:42.359 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:42.359 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:42.359 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:42.618 
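The sequence starting here, together with the addressing, bridging, and iptables lines that follow, builds the harness's virtual topology: four veth pairs, with the target-side interfaces moved into the nvmf_tgt_ns_spdk namespace and every bridge-side peer enslaved to nvmf_br. Condensed to a single initiator/target pair, the construction is:

ip netns add nvmf_tgt_ns_spdk
# One veth pair per endpoint; the *_br peer will join the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
# Target side lives in its own namespace; initiator side stays in the root namespace.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# The bridge stitches the two halves together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br

Note that the ACCEPT rules added below carry an 'SPDK_NVMF:' comment; that tag is what lets teardown strip them wholesale with iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen at the end of the previous test.
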
00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:42.618 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:42.618 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:21:42.618 00:21:42.618 --- 10.0.0.3 ping statistics --- 00:21:42.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.618 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:42.618 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:42.618 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:21:42.618 00:21:42.618 --- 10.0.0.4 ping statistics --- 00:21:42.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.618 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:21:42.618 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:42.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:42.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:21:42.619 00:21:42.619 --- 10.0.0.1 ping statistics --- 00:21:42.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.619 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:42.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:21:42.619 00:21:42.619 --- 10.0.0.2 ping statistics --- 00:21:42.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.619 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=81187 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 81187 00:21:42.619 00:05:49 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 81187 ']' 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.619 00:05:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:42.877 [2024-11-19 00:05:49.382994] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:21:42.877 [2024-11-19 00:05:49.383155] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.135 [2024-11-19 00:05:49.571904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:43.135 [2024-11-19 00:05:49.698960] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.135 [2024-11-19 00:05:49.699028] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.135 [2024-11-19 00:05:49.699052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.135 [2024-11-19 00:05:49.699067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.135 [2024-11-19 00:05:49.699086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
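The nvmf/common.sh setup traced above builds a two-path test network: two initiator-side interfaces (nvmf_init_if, nvmf_init_if2; 10.0.0.1/24 and 10.0.0.2/24) stay in the root namespace, two target-side interfaces (nvmf_tgt_if, nvmf_tgt_if2; 10.0.0.3/24 and 10.0.0.4/24) are moved into the nvmf_tgt_ns_spdk namespace, everything is joined through the nvmf_br bridge, iptables rules accept TCP port 4420 on the initiator interfaces, and the four pings verify both directions before nvmf_tgt is launched inside the namespace. A minimal sketch of one such path, assuming the *_br devices are the bridge-side ends of veth pairs (the pair creation itself happens before this excerpt and is an assumption here):

  ip netns add nvmf_tgt_ns_spdk                              # target gets its own net namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator path (assumed veth pair)
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target path (assumed veth pair)
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                            # one bridge joins both sides
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

The second path repeats the same pattern with the *_if2/*_br2 devices; nvmf_tgt then runs under ip netns exec nvmf_tgt_ns_spdk, so its listeners bind on the 10.0.0.3/10.0.0.4 side of the bridge.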
00:21:43.135 [2024-11-19 00:05:49.701229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.135 [2024-11-19 00:05:49.701366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.135 [2024-11-19 00:05:49.701382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:43.392 [2024-11-19 00:05:49.898038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:43.959 00:05:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.959 00:05:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:43.959 00:05:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:43.959 00:05:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:43.959 00:05:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:43.959 00:05:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.959 00:05:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:43.959 [2024-11-19 00:05:50.625243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.959 00:05:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:44.526 Malloc0 00:21:44.526 00:05:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:44.784 00:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:44.784 00:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:45.043 [2024-11-19 00:05:51.683502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:45.043 00:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:45.301 [2024-11-19 00:05:51.919675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:45.301 00:05:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:21:45.560 [2024-11-19 00:05:52.139922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:21:45.560 00:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:45.560 00:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=81245 00:21:45.560 00:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
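Condensed, the provisioning sequence that host/failover.sh@22-28 just drove over scripts/rpc.py is the standard NVMe-oF target bring-up, with three listeners on the same subsystem so there are three paths to fail between (addresses, NQN, and sizes exactly as in the trace above):

  rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB in-capsule data
  rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                     # three listeners = three failover paths
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
  done

bdevperf is then started with -z, so it idles until controllers are attached and perform_tests is issued over its own RPC socket (/var/tmp/bdevperf.sock), which is what the waitforlisten and attach_controller steps below do.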
00:21:45.560 00:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 81245 /var/tmp/bdevperf.sock 00:21:45.560 00:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 81245 ']' 00:21:45.560 00:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.560 00:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:45.560 00:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:45.560 00:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.560 00:05:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:46.935 00:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.935 00:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:46.935 00:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:46.935 NVMe0n1 00:21:46.935 00:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:47.193 00:21:47.193 00:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=81267 00:21:47.193 00:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:47.193 00:05:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:48.569 00:05:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:48.569 [2024-11-19 00:05:55.132928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:21:48.569 [2024-11-19 00:05:55.132988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:21:48.569 [2024-11-19 00:05:55.133003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:21:48.569 [2024-11-19 00:05:55.133016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:21:48.569 [2024-11-19 00:05:55.133026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:21:48.569 [2024-11-19 00:05:55.133038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:21:48.569 [2024-11-19 00:05:55.133049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:21:48.569 
[tcp.c:1773 nvmf_tcp_qpair_set_recv_state message above repeated verbatim for tqpair=0x618000003080, only the timestamp advancing (00:05:55.133061 through 00:05:55.134315); roughly 120 duplicate entries omitted] 00:05:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:51.856 00:05:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:51.856 00:21:51.856 00:05:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:52.115 00:05:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:55.404 00:06:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:55.404 [2024-11-19 00:06:02.018987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:55.404 00:06:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:56.781 00:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:21:56.781 00:06:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 81267 00:22:03.369 { 00:22:03.369 "results": [ 00:22:03.369 { 00:22:03.369 "job": "NVMe0n1", 00:22:03.369 "core_mask": "0x1", 00:22:03.369 "workload": "verify", 00:22:03.369 "status": "finished", 00:22:03.369 "verify_range": { 00:22:03.369 "start": 0, 00:22:03.369 "length": 16384 00:22:03.369 }, 00:22:03.369 "queue_depth": 128, 00:22:03.369 "io_size": 4096, 00:22:03.369 "runtime": 15.009912, 00:22:03.369 "iops": 7991.918939964471, 00:22:03.369 "mibps": 31.218433359236215, 00:22:03.369 "io_failed": 3589, 00:22:03.369 "io_timeout": 0, 00:22:03.369 "avg_latency_us": 15519.099564700073, 00:22:03.369 "min_latency_us": 688.8727272727273, 00:22:03.369 "max_latency_us": 22401.396363636362 00:22:03.369 } 00:22:03.369 ], 00:22:03.369 "core_count": 1 00:22:03.369 } 00:22:03.369 00:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 81245 00:22:03.369 00:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 81245 ']' 00:22:03.369 00:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 81245 00:22:03.369 00:06:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:03.369 00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.369 00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81245 00:22:03.369 killing process with pid 81245 00:22:03.369 00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:03.369 00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:03.369 00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81245' 00:22:03.369 00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 81245 00:22:03.369 00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 81245 00:22:03.369 00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:03.369 [2024-11-19 00:05:52.238911] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:22:03.369 [2024-11-19 00:05:52.239075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81245 ] 00:22:03.369 [2024-11-19 00:05:52.414966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.369 [2024-11-19 00:05:52.538700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.369 [2024-11-19 00:05:52.715810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:03.369 Running I/O for 15 seconds... 00:22:03.369 6427.00 IOPS, 25.11 MiB/s [2024-11-19T00:06:10.061Z] [2024-11-19 00:05:55.134400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.369 [2024-11-19 00:05:55.134467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.369 [2024-11-19 00:05:55.134515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:58720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.369 [2024-11-19 00:05:55.134540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.369 [2024-11-19 00:05:55.134562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.369 [2024-11-19 00:05:55.134583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.369 [2024-11-19 00:05:55.134617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:58736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.369 [2024-11-19 00:05:55.134640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.370 [2024-11-19 00:05:55.134660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.370 [2024-11-19 00:05:55.134679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.370 [2024-11-19 00:05:55.134698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:58752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.370 [2024-11-19 00:05:55.134722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.370 [2024-11-19 00:05:55.134742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.370 [2024-11-19 00:05:55.134762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.370 [2024-11-19 00:05:55.134781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.370 [2024-11-19 00:05:55.134800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:03.370 [the nvme_qpair.c READ / ABORTED - SQ DELETION pair above repeats for every outstanding I/O in this excerpt, lba 58776 through lba 59496 on qid:1 with varying cid; the near-identical entries are omitted]
00:22:03.372 [2024-11-19 00:05:55.138748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.372 [2024-11-19 00:05:55.138768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.138788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.372 [2024-11-19 00:05:55.138808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.138827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.372 [2024-11-19 00:05:55.138849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.138869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.372 [2024-11-19 00:05:55.138888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.138908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.372 [2024-11-19 00:05:55.138930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.138950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.372 [2024-11-19 00:05:55.138970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.139005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.372 [2024-11-19 00:05:55.139024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.139043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.372 [2024-11-19 00:05:55.139064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.139085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.372 [2024-11-19 00:05:55.139117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.139137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.372 [2024-11-19 00:05:55.139157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.139176] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.372 [2024-11-19 00:05:55.139198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.139218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.372 [2024-11-19 00:05:55.139237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.139257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.372 [2024-11-19 00:05:55.139284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.139305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.372 [2024-11-19 00:05:55.139328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.139348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.372 [2024-11-19 00:05:55.139368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.139388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.372 [2024-11-19 00:05:55.139407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.139427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.372 [2024-11-19 00:05:55.139448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.139468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.372 [2024-11-19 00:05:55.139488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.139507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.372 [2024-11-19 00:05:55.139530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.139549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.372 [2024-11-19 00:05:55.139569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.139588] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.372 [2024-11-19 00:05:55.139607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.139659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.372 [2024-11-19 00:05:55.139681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.372 [2024-11-19 00:05:55.139701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.372 [2024-11-19 00:05:55.139722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.373 [2024-11-19 00:05:55.139742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.373 [2024-11-19 00:05:55.139765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.373 [2024-11-19 00:05:55.139785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.373 [2024-11-19 00:05:55.139805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.373 [2024-11-19 00:05:55.139833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.373 [2024-11-19 00:05:55.139855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.373 [2024-11-19 00:05:55.139875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.373 [2024-11-19 00:05:55.139897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.373 [2024-11-19 00:05:55.139917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.373 [2024-11-19 00:05:55.139935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.373 [2024-11-19 00:05:55.139953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set 00:22:03.373 [2024-11-19 00:05:55.139990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.373 [2024-11-19 00:05:55.140005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.373 [2024-11-19 00:05:55.140023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59608 len:8 PRP1 0x0 PRP2 0x0 00:22:03.373 [2024-11-19 00:05:55.140040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.373 [2024-11-19 00:05:55.140323] bdev_nvme.c:2052:bdev_nvme_failover_trid: 
00:22:03.373 [2024-11-19 00:05:55.140323] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:22:03.373 [2024-11-19 00:05:55.140401 .. 00:05:55.140545] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (four per-command NOTICE/completion pairs condensed)
00:22:03.373 [2024-11-19 00:05:55.140582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:03.373 [2024-11-19 00:05:55.140703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:22:03.373 [2024-11-19 00:05:55.144507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:03.373 [2024-11-19 00:05:55.167741] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
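The sequence above is the bdev_nvme failover path doing its job: the active TRID (10.0.0.3:4420) goes down, every queued I/O and admin command is completed as ABORTED - SQ DELETION, and the controller reconnects on the alternate TRID (10.0.0.3:4421). The attach commands are not part of this excerpt, but a minimal sketch of how such a two-path controller is typically registered with SPDK's scripts/rpc.py looks like the following; the bdev name Nvme0 and the choice of the failover multipath policy are assumptions, not taken from this log:
+ scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
+ scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
With both paths registered under the same subsystem NQN, a connection failure on the first TRID triggers exactly the bdev_nvme_failover_trid / resetting-controller notices seen here.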
00:22:03.373 6989.00 IOPS, 27.30 MiB/s [2024-11-19T00:06:10.065Z] 7436.00 IOPS, 29.05 MiB/s [2024-11-19T00:06:10.065Z] 7662.75 IOPS, 29.93 MiB/s [2024-11-19T00:06:10.065Z]
00:22:03.373 [2024-11-19 00:05:58.735041 .. 00:05:58.735268] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3-0 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (four per-command NOTICE/completion pairs condensed)
00:22:03.373 [2024-11-19 00:05:58.735283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:22:03.376 [2024-11-19 00:05:58.735473 .. 00:05:58.740036] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: dozens of queued WRITE commands (sqid:1 nsid:1 lba:41776-42216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) and queued READ commands (sqid:1 nsid:1 lba:41200-41680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (repetitive per-command NOTICE/completion pairs condensed)
00:05:58.740053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.376 [2024-11-19 00:05:58.740072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.376 [2024-11-19 00:05:58.740107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.376 [2024-11-19 00:05:58.740126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.376 [2024-11-19 00:05:58.740143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.376 [2024-11-19 00:05:58.740161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.376 [2024-11-19 00:05:58.740177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.376 [2024-11-19 00:05:58.740196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.376 [2024-11-19 00:05:58.740213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.376 [2024-11-19 00:05:58.740275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.376 [2024-11-19 00:05:58.740294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.376 [2024-11-19 00:05:58.740313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.376 [2024-11-19 00:05:58.740340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.376 [2024-11-19 00:05:58.740361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.376 [2024-11-19 00:05:58.740380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.376 [2024-11-19 00:05:58.740400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.376 [2024-11-19 00:05:58.740417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.376 [2024-11-19 00:05:58.740436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.376 [2024-11-19 00:05:58.740455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.376 [2024-11-19 00:05:58.740474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:03.376 [2024-11-19 00:05:58.740492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.376 [2024-11-19 00:05:58.740511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ba00 is same with the state(6) to be set 00:22:03.376 [2024-11-19 00:05:58.740532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:03.376 [2024-11-19 00:05:58.740549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:03.376 [2024-11-19 00:05:58.740575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41768 len:8 PRP1 0x0 PRP2 0x0 00:22:03.376 [2024-11-19 00:05:58.740607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.376 [2024-11-19 00:05:58.740873] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:22:03.376 [2024-11-19 00:05:58.740902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:03.376 [2024-11-19 00:05:58.744747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:03.376 [2024-11-19 00:05:58.744801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:03.376 [2024-11-19 00:05:58.787443] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:22:03.376 7675.20 IOPS, 29.98 MiB/s [2024-11-19T00:06:10.068Z] 7799.33 IOPS, 30.47 MiB/s [2024-11-19T00:06:10.068Z] 7844.43 IOPS, 30.64 MiB/s [2024-11-19T00:06:10.068Z] 7867.75 IOPS, 30.73 MiB/s [2024-11-19T00:06:10.068Z] 7869.33 IOPS, 30.74 MiB/s [2024-11-19T00:06:10.068Z] [2024-11-19 00:06:03.311896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.377 [2024-11-19 00:06:03.311973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.377 [2024-11-19 00:06:03.312013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.377 [2024-11-19 00:06:03.312035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.377 [2024-11-19 00:06:03.312055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.377 [2024-11-19 00:06:03.312072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.377 [2024-11-19 00:06:03.312090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.377 [2024-11-19 00:06:03.312135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:03.377 [2024-11-19 00:06:03.312157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:03.377 [2024-11-19 00:06:03.312175] 
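
Note: the sequence above is one complete failover cycle as the bdev_nvme layer logs it. Every I/O still queued on the dying qpair is completed with ABORTED - SQ DELETION (00/08), bdev_nvme_failover_trid rotates from 10.0.0.3:4421 to the next path registered with -x failover (10.0.0.3:4422), the controller is disconnected and reset, and throughput resumes once "Resetting controller successful" appears. This portion of the log does not show what dropped the 4421 path; a minimal sketch of one plausible trigger, assuming the target answers on the default RPC socket, would be:

    # Hypothetical trigger (not taken from this log): drop the listener the
    # initiator is currently connected to, forcing bdev_nvme to fail over to
    # the next registered path.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
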
00:22:03.377 [... 00:06:03.311896 - 00:06:03.317028: second abort storm, dozens of similar NOTICE command/completion pairs elided: WRITE sqid:1 lba:84272-84776 and READ sqid:1 lba:83760-84256 (len:8 each), every command ABORTED - SQ DELETION (00/08) qid:1 ...]
00:22:03.380 [2024-11-19 00:06:03.317046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set
00:22:03.380 [2024-11-19 00:06:03.317067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:03.380 [2024-11-19 00:06:03.317082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:03.380 [2024-11-19 00:06:03.317097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84264 len:8 PRP1 0x0 PRP2 0x0
00:22:03.380 [2024-11-19 00:06:03.317113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:03.380 [2024-11-19 00:06:03.317347] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420
00:22:03.380 [2024-11-19 00:06:03.317430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:03.380 [2024-11-19 00:06:03.317458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:03.380 [... 00:06:03.317478 - 00:06:03.317560: the remaining three ASYNC EVENT REQUEST (0c) commands (qid:0 cid:1-3) aborted the same way ...]
00:22:03.380 [2024-11-19 00:06:03.317576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:22:03.380 [2024-11-19 00:06:03.317639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:22:03.380 [2024-11-19 00:06:03.321352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:22:03.380 [2024-11-19 00:06:03.349035] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
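
Note: in this cycle the admin queue is torn down as well, so the four outstanding ASYNC EVENT REQUEST commands (qid:0 cid:0-3) are aborted with the same SQ DELETION status as the I/O; that is expected teardown noise during a disconnect, not a separate failure. When triaging a saved copy of this output, the path rotation and the reset count can be recovered with greps like the following (the log file name is illustrative):

    # List each trid rotation, then count completed resets. For this phase
    # host/failover.sh expects exactly 3 "Resetting controller successful" lines.
    grep -o 'Start failover from [0-9.]*:[0-9]* to [0-9.]*:[0-9]*' bdevperf.log
    grep -c 'Resetting controller successful' bdevperf.log
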
00:22:03.380 7842.20 IOPS, 30.63 MiB/s [2024-11-19T00:06:10.072Z] 7885.64 IOPS, 30.80 MiB/s [2024-11-19T00:06:10.072Z] 7921.83 IOPS, 30.94 MiB/s [2024-11-19T00:06:10.072Z] 7947.54 IOPS, 31.05 MiB/s [2024-11-19T00:06:10.072Z] 7970.71 IOPS, 31.14 MiB/s [2024-11-19T00:06:10.072Z] 7991.20 IOPS, 31.22 MiB/s
00:22:03.380 Latency(us)
00:22:03.380 [2024-11-19T00:06:10.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:03.380 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:03.380 Verification LBA range: start 0x0 length 0x4000
00:22:03.380 NVMe0n1 : 15.01 7991.92 31.22 239.11 0.00 15519.10 688.87 22401.40
00:22:03.380 [2024-11-19T00:06:10.072Z] ===================================================================================================================
00:22:03.380 [2024-11-19T00:06:10.072Z] Total : 7991.92 31.22 239.11 0.00 15519.10 688.87 22401.40
00:22:03.380 Received shutdown signal, test time was about 15.000000 seconds
00:22:03.380
00:22:03.380 Latency(us)
00:22:03.380 [2024-11-19T00:06:10.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:03.380 [2024-11-19T00:06:10.072Z] ===================================================================================================================
00:22:03.380 [2024-11-19T00:06:10.072Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:03.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=81448
00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 81448 /var/tmp/bdevperf.sock
00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 81448 ']'
00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
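
Note: the trace above is host/failover.sh@65-75 doing two things: counting the "Resetting controller successful" lines from the run that just finished (count=3, one per path rotation, so the (( count != 3 )) check at @67 does not trip) and relaunching bdevperf in RPC-server mode (-z) so the attach/detach sequence below can be driven over /var/tmp/bdevperf.sock. waitforlisten blocks until that socket answers; a simplified stand-in for this launch-and-wait step, not the autotest helper itself, might look like:

    # Minimal sketch: start bdevperf as an RPC server and poll its socket.
    sock=/var/tmp/bdevperf.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r "$sock" -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # The real waitforlisten caps this at max_retries=100 attempts.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
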
00:22:03.380 00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.380 00:06:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:04.316 00:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.316 00:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:04.316 00:06:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:04.575 [2024-11-19 00:06:11.185777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:04.575 00:06:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:22:04.834 [2024-11-19 00:06:11.417929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:22:04.834 00:06:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:05.093 NVMe0n1 00:22:05.093 00:06:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:05.352 00:22:05.352 00:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:05.920 00:22:05.920 00:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:05.920 00:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:06.178 00:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:06.178 00:06:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:09.519 00:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:09.519 00:06:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:09.519 00:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=81526 00:22:09.519 00:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 81526 00:22:09.519 00:06:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:10.895 { 00:22:10.895 "results": [ 00:22:10.895 { 00:22:10.895 "job": "NVMe0n1", 00:22:10.895 "core_mask": "0x1", 00:22:10.895 "workload": "verify", 00:22:10.895 "status": "finished", 00:22:10.895 "verify_range": { 00:22:10.895 "start": 0, 00:22:10.895 "length": 16384 00:22:10.895 }, 00:22:10.895 "queue_depth": 128, 
00:22:10.895 "io_size": 4096, 00:22:10.895 "runtime": 1.00666, 00:22:10.895 "iops": 6265.273279955496, 00:22:10.895 "mibps": 24.473723749826156, 00:22:10.895 "io_failed": 0, 00:22:10.895 "io_timeout": 0, 00:22:10.895 "avg_latency_us": 20351.83948570852, 00:22:10.895 "min_latency_us": 1452.2181818181818, 00:22:10.895 "max_latency_us": 18230.923636363637 00:22:10.895 } 00:22:10.895 ], 00:22:10.895 "core_count": 1 00:22:10.895 } 00:22:10.895 00:06:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:10.895 [2024-11-19 00:06:10.019440] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:10.895 [2024-11-19 00:06:10.019665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81448 ] 00:22:10.895 [2024-11-19 00:06:10.207713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.895 [2024-11-19 00:06:10.332072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.895 [2024-11-19 00:06:10.509099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:10.895 [2024-11-19 00:06:12.838936] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:22:10.895 [2024-11-19 00:06:12.839093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.895 [2024-11-19 00:06:12.839126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.895 [2024-11-19 00:06:12.839154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.895 [2024-11-19 00:06:12.839173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.895 [2024-11-19 00:06:12.839191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.895 [2024-11-19 00:06:12.839208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.895 [2024-11-19 00:06:12.839229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.895 [2024-11-19 00:06:12.839246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.895 [2024-11-19 00:06:12.839271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:22:10.895 [2024-11-19 00:06:12.839344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:10.896 [2024-11-19 00:06:12.839391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:10.896 [2024-11-19 00:06:12.843886] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
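The perform_tests RPC returns the JSON object reproduced above; the same figures reappear in the human-readable latency table that follows in the try.txt dump. A sketch for extracting the headline numbers from a saved copy of that JSON, assuming jq is available and using results.json as a stand-in filename:

# Sketch: summarize the bdevperf result object shown above.
jq -r '.results[]
       | "\(.job): \(.iops | floor) IOPS, \(.mibps | floor) MiB/s, avg \(.avg_latency_us | floor) us"' \
   results.json
# expected output: NVMe0n1: 6265 IOPS, 24 MiB/s, avg 20351 us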
00:22:10.896 Running I/O for 1 seconds... 00:22:10.896 6179.00 IOPS, 24.14 MiB/s 00:22:10.896 Latency(us) 00:22:10.896 [2024-11-19T00:06:17.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.896 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:10.896 Verification LBA range: start 0x0 length 0x4000 00:22:10.896 NVMe0n1 : 1.01 6265.27 24.47 0.00 0.00 20351.84 1452.22 18230.92 00:22:10.896 [2024-11-19T00:06:17.588Z] =================================================================================================================== 00:22:10.896 [2024-11-19T00:06:17.588Z] Total : 6265.27 24.47 0.00 0.00 20351.84 1452.22 18230.92 00:22:10.896 00:06:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:10.896 00:06:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:11.154 00:06:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:11.424 00:06:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:11.424 00:06:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:11.683 00:06:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:11.940 00:06:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:15.225 00:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:15.225 00:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:15.225 00:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 81448 00:22:15.225 00:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 81448 ']' 00:22:15.225 00:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 81448 00:22:15.225 00:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:15.225 00:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:15.225 00:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81448 00:22:15.225 00:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:15.225 00:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:15.225 killing process with pid 81448 00:22:15.225 00:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81448' 00:22:15.225 00:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 81448 00:22:15.225 00:06:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 81448 00:22:16.162 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:16.162 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:16.421 rmmod nvme_tcp 00:22:16.421 rmmod nvme_fabrics 00:22:16.421 rmmod nvme_keyring 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 81187 ']' 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 81187 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 81187 ']' 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 81187 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81187 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:16.421 killing process with pid 81187 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81187' 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 81187 00:22:16.421 00:06:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 81187 00:22:17.358 00:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:17.358 00:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:17.358 00:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:17.358 00:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:17.358 00:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:17.358 00:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:17.358 00:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:17.358 00:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:17.358 00:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:17.358 00:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:17.358 00:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:17.358 00:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:17.358 00:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:17.358 00:06:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:17.358 00:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:17.358 00:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:17.358 00:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:17.358 00:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:17.617 00:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:17.617 00:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:17.617 00:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:17.617 00:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:17.617 00:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:17.617 00:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.617 00:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.617 00:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.617 00:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:22:17.617 00:22:17.617 real 0m35.530s 00:22:17.617 user 2m15.781s 00:22:17.617 sys 0m5.496s 00:22:17.617 00:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:17.617 00:06:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:17.617 ************************************ 00:22:17.617 END TEST nvmf_failover 00:22:17.617 ************************************ 00:22:17.617 00:06:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:17.617 00:06:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:17.617 00:06:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:17.617 00:06:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.617 ************************************ 00:22:17.617 START TEST nvmf_host_discovery 00:22:17.617 ************************************ 00:22:17.617 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:17.877 * Looking for test storage... 
00:22:17.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:17.877 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:17.877 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:17.877 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:22:17.877 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:17.877 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:17.877 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:17.877 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:17.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.878 --rc genhtml_branch_coverage=1 00:22:17.878 --rc genhtml_function_coverage=1 00:22:17.878 --rc genhtml_legend=1 00:22:17.878 --rc geninfo_all_blocks=1 00:22:17.878 --rc geninfo_unexecuted_blocks=1 00:22:17.878 00:22:17.878 ' 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:17.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.878 --rc genhtml_branch_coverage=1 00:22:17.878 --rc genhtml_function_coverage=1 00:22:17.878 --rc genhtml_legend=1 00:22:17.878 --rc geninfo_all_blocks=1 00:22:17.878 --rc geninfo_unexecuted_blocks=1 00:22:17.878 00:22:17.878 ' 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:17.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.878 --rc genhtml_branch_coverage=1 00:22:17.878 --rc genhtml_function_coverage=1 00:22:17.878 --rc genhtml_legend=1 00:22:17.878 --rc geninfo_all_blocks=1 00:22:17.878 --rc geninfo_unexecuted_blocks=1 00:22:17.878 00:22:17.878 ' 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:17.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.878 --rc genhtml_branch_coverage=1 00:22:17.878 --rc genhtml_function_coverage=1 00:22:17.878 --rc genhtml_legend=1 00:22:17.878 --rc geninfo_all_blocks=1 00:22:17.878 --rc geninfo_unexecuted_blocks=1 00:22:17.878 00:22:17.878 ' 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:17.878 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.878 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
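The nvmf_veth_init variables being set here (and continuing below) describe a bridged veth topology: two initiator interfaces on the host side (10.0.0.1 and 10.0.0.2) and two target interfaces inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all joined through the nvmf_br bridge. A condensed sketch of the wiring that the ip commands traced below perform, shown for one initiator/target pair:

# Sketch: one initiator/target pair from the topology built below.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br   # host-side veth ends join the bridge
ip link set nvmf_tgt_br master nvmf_br
# plus "ip link set ... up" on every interface, exactly as traced below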
00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:17.879 Cannot find device "nvmf_init_br" 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:17.879 Cannot find device "nvmf_init_br2" 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:17.879 Cannot find device "nvmf_tgt_br" 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:17.879 Cannot find device "nvmf_tgt_br2" 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:17.879 Cannot find device "nvmf_init_br" 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:17.879 Cannot find device "nvmf_init_br2" 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:17.879 Cannot find device "nvmf_tgt_br" 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:17.879 Cannot find device "nvmf_tgt_br2" 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:22:17.879 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:18.138 Cannot find device "nvmf_br" 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:18.138 Cannot find device "nvmf_init_if" 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:18.138 Cannot find device "nvmf_init_if2" 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:18.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:18.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:18.138 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:18.139 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:18.139 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.124 ms 00:22:18.139 00:22:18.139 --- 10.0.0.3 ping statistics --- 00:22:18.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.139 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:18.139 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:18.139 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:22:18.139 00:22:18.139 --- 10.0.0.4 ping statistics --- 00:22:18.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.139 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:18.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:18.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:22:18.139 00:22:18.139 --- 10.0.0.1 ping statistics --- 00:22:18.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.139 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:18.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:18.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:22:18.139 00:22:18.139 --- 10.0.0.2 ping statistics --- 00:22:18.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.139 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:18.139 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:18.398 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:18.398 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:18.398 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:18.398 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:18.398 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=81873 00:22:18.398 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 81873 00:22:18.398 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:18.398 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 81873 ']' 00:22:18.398 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.398 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.398 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.398 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.398 00:06:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:18.398 [2024-11-19 00:06:24.957318] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
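With the topology answering pings in both directions, the test runs two SPDK applications: nvmfappstart launches the target inside the namespace (NVMF_APP was prefixed with the ip netns exec wrapper at nvmf/common.sh@227 above), while discovery.sh later starts a second nvmf_tgt outside it as the discovery host, reached over a private RPC socket. A condensed sketch of that two-process layout, assuming the build-tree paths used in this run:

# Sketch: target inside the netns (core mask 0x2), discovery host outside
# (core mask 0x1, private RPC socket), matching the traces above and below.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
# Target RPCs go through the default /var/tmp/spdk.sock; host-side RPCs are
# issued as "rpc_cmd -s /tmp/host.sock <method>", as in the lines below.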
00:22:18.398 [2024-11-19 00:06:24.957491] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.657 [2024-11-19 00:06:25.144746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.657 [2024-11-19 00:06:25.269288] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.657 [2024-11-19 00:06:25.269366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.657 [2024-11-19 00:06:25.269391] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.657 [2024-11-19 00:06:25.269422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.657 [2024-11-19 00:06:25.269440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:18.657 [2024-11-19 00:06:25.270871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.917 [2024-11-19 00:06:25.454365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:19.484 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:19.484 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:19.484 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:19.484 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:19.484 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.485 [2024-11-19 00:06:25.913579] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.485 [2024-11-19 00:06:25.921862] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.485 00:06:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.485 null0 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.485 null1 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=81904 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 81904 /tmp/host.sock 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 81904 ']' 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.485 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.485 00:06:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:19.485 [2024-11-19 00:06:26.072358] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:22:19.485 [2024-11-19 00:06:26.072513] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81904 ] 00:22:19.744 [2024-11-19 00:06:26.259118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.744 [2024-11-19 00:06:26.382844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.004 [2024-11-19 00:06:26.554771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:20.571 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.571 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:20.571 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:20.571 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:20.571 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.571 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.571 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.571 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:20.571 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:20.572 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@91 -- # get_subsystem_names 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.831 [2024-11-19 00:06:27.398269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:20.831 00:06:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:20.831 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:20.832 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.832 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:20.832 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:20.832 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.832 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:20.832 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:20.832 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:20.832 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:20.832 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:20.832 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:20.832 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:20.832 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:20.832 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:20.832 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:20.832 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:20.832 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.832 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:22:21.091 00:06:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:21.658 [2024-11-19 00:06:28.048002] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:21.658 [2024-11-19 00:06:28.048057] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:21.658 [2024-11-19 00:06:28.048092] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:21.658 
[2024-11-19 00:06:28.054085] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:22:21.658 [2024-11-19 00:06:28.116730] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:22:21.658 [2024-11-19 00:06:28.118143] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b280:1 started. 00:22:21.658 [2024-11-19 00:06:28.120215] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:21.658 [2024-11-19 00:06:28.120291] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:21.658 [2024-11-19 00:06:28.126748] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b280 was disconnected and freed. delete nvme_qpair. 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.227 
00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.227 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.228 [2024-11-19 00:06:28.869640] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:22:22.228 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:22.228 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:22.228 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:22.228 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:22.228 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:22.228 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:22.228 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.228 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.228 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.228 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:22.228 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:22.228 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:22.228 [2024-11-19 00:06:28.877554] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
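[annotation] The two helpers exercised throughout this trace are small shell utilities from the test harness. Below is a minimal reconstruction from the xtrace itself — the authoritative bodies live in common/autotest_common.sh and host/discovery.sh, so treat this as a sketch, not the verbatim source. Note that `rpc_cmd -s /tmp/host.sock` talks to the host-side SPDK app over its private socket, while plain `rpc_cmd` (no -s) drives the nvmf target app.

# autotest_common.sh (reconstructed): poll a shell condition up to 10 times, 1s apart.
waitforcondition() {
    local cond=$1
    local max=10
    while ((max--)); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}

# host/discovery.sh (reconstructed): count notifications newer than the cursor
# and advance it. This is why the trace shows notify_id stepping 0 -> 1 -> 2
# (and later 4) as namespace-add events arrive.
get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length')
    notify_id=$((notify_id + notification_count))
}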
00:22:22.228 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.487 [2024-11-19 00:06:28.985332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:22.487 [2024-11-19 00:06:28.986365] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:22.487 [2024-11-19 00:06:28.986437] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:22.487 [2024-11-19 00:06:28.992411] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:22.487 00:06:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:22.487 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.487 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.487 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:22.487 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:22.487 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:22.487 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:22.487 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:22.487 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:22.487 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:22.487 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.487 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:22.487 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.487 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.487 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:22.487 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:22.487 [2024-11-19 00:06:29.051102] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:22:22.487 [2024-11-19 00:06:29.051187] 
bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:22.487 [2024-11-19 00:06:29.051205] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:22.487 [2024-11-19 00:06:29.051215] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:22.487 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.487 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:22.487 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.488 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.746 [2024-11-19 00:06:29.206419] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:22.746 [2024-11-19 00:06:29.206465] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:22.746 [2024-11-19 00:06:29.207274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.746 [2024-11-19 00:06:29.207349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.746 [2024-11-19 00:06:29.207368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.746 [2024-11-19 00:06:29.207381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.746 [2024-11-19 00:06:29.207393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.746 [2024-11-19 00:06:29.207404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.746 [2024-11-19 00:06:29.207427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.746 [2024-11-19 00:06:29.207438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.746 [2024-11-19 00:06:29.207449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:22.746 [2024-11-19 00:06:29.212604] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:22:22.746 [2024-11-19 00:06:29.212690] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:22.746 [2024-11-19 00:06:29.212788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:22.746 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:22.747 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:22.747 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:22.747 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:22.747 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.747 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.747 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.747 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:22.747 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:22.747 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:22.747 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:22.747 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:22.747 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.747 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:22.747 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.747 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:22.747 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:22.747 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:22.747 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 
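[annotation] Condensed, the failover/teardown path traced above and below comes down to three RPCs: drop the first listener on the target, confirm the host-side controller is left with only the 4421 path, then stop the discovery service so the controller and its bdevs go away. Each command appears verbatim in the trace:

# target: remove the 4420 listener; the discovery service sees the path disappear
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

# host: only the second path remains on the attached controller
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'   # -> 4421

# host: stopping discovery detaches nvme0, emptying the controller and bdev lists
rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme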
00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.006 00:06:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.942 [2024-11-19 00:06:30.628824] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:23.942 [2024-11-19 00:06:30.628880] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:23.942 [2024-11-19 00:06:30.628918] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:24.202 [2024-11-19 00:06:30.634891] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:22:24.202 [2024-11-19 00:06:30.693461] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:22:24.202 [2024-11-19 00:06:30.694655] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x61500002c680:1 started. 00:22:24.202 [2024-11-19 00:06:30.697226] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:24.202 [2024-11-19 00:06:30.697300] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:24.202 [2024-11-19 00:06:30.699479] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x61500002c680 was disconnected and freed. delete nvme_qpair. 
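[annotation] From here the script asserts expected failures with the harness's NOT wrapper — the es bookkeeping that starts just above and continues below. Roughly, and stripped of the valid_exec_arg and es>128 special-casing visible in the trace, it inverts the wrapped command's exit status (a simplified sketch, not the verbatim source):

# autotest_common.sh (simplified sketch): succeed only if the command fails.
NOT() {
    local es=0
    "$@" || es=$?
    ((es != 0))
}

It is used below to check three expected rejections: re-registering the same discovery name ("nvme", with -w mapping to "wait_for_attach": true) fails with JSON-RPC error -17 "File exists"; a second name ("nvme_second") on the same 10.0.0.3:8009 endpoint is rejected the same way; and attaching "nvme_second" to port 8010, where nothing listens (-T 3000 mapping to "attach_timeout_ms": 3000), fails with -110 "Connection timed out".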
00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.202 request: 00:22:24.202 { 00:22:24.202 "name": "nvme", 00:22:24.202 "trtype": "tcp", 00:22:24.202 "traddr": "10.0.0.3", 00:22:24.202 "adrfam": "ipv4", 00:22:24.202 "trsvcid": "8009", 00:22:24.202 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:24.202 "wait_for_attach": true, 00:22:24.202 "method": "bdev_nvme_start_discovery", 00:22:24.202 "req_id": 1 00:22:24.202 } 00:22:24.202 Got JSON-RPC error response 00:22:24.202 response: 00:22:24.202 { 00:22:24.202 "code": -17, 00:22:24.202 "message": "File exists" 00:22:24.202 } 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.202 00:06:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.202 request: 00:22:24.202 { 00:22:24.202 "name": "nvme_second", 00:22:24.202 "trtype": "tcp", 00:22:24.202 "traddr": "10.0.0.3", 00:22:24.202 "adrfam": "ipv4", 00:22:24.202 "trsvcid": "8009", 00:22:24.202 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:24.202 "wait_for_attach": true, 00:22:24.202 "method": "bdev_nvme_start_discovery", 00:22:24.202 "req_id": 1 00:22:24.202 } 00:22:24.202 Got JSON-RPC error response 00:22:24.202 response: 00:22:24.202 { 00:22:24.202 "code": -17, 00:22:24.202 "message": "File exists" 00:22:24.202 } 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:24.202 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.461 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:24.462 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:24.462 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.462 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.462 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.462 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:24.462 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:24.462 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:24.462 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.462 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:24.462 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:24.462 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:24.462 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:24.462 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:24.462 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.462 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:24.462 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.462 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:24.462 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.462 00:06:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.398 [2024-11-19 00:06:31.961771] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.398 [2024-11-19 00:06:31.961848] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c900 with addr=10.0.0.3, port=8010 00:22:25.398 [2024-11-19 00:06:31.961901] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:25.398 [2024-11-19 00:06:31.961916] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:25.398 [2024-11-19 00:06:31.961929] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:22:26.334 [2024-11-19 00:06:32.961753] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.334 [2024-11-19 00:06:32.961838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002cb80 with addr=10.0.0.3, port=8010 00:22:26.334 [2024-11-19 00:06:32.961889] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:26.334 [2024-11-19 00:06:32.961903] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:26.334 [2024-11-19 00:06:32.961915] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:22:27.711 [2024-11-19 00:06:33.961561] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:22:27.711 request: 00:22:27.711 { 00:22:27.711 "name": "nvme_second", 00:22:27.711 "trtype": "tcp", 00:22:27.711 "traddr": "10.0.0.3", 00:22:27.711 "adrfam": "ipv4", 00:22:27.711 "trsvcid": "8010", 00:22:27.711 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:27.711 "wait_for_attach": false, 00:22:27.711 "attach_timeout_ms": 3000, 00:22:27.711 "method": "bdev_nvme_start_discovery", 00:22:27.711 "req_id": 1 00:22:27.711 } 00:22:27.711 Got JSON-RPC error response 00:22:27.711 response: 00:22:27.711 { 00:22:27.711 "code": -110, 00:22:27.711 "message": "Connection timed out" 00:22:27.711 } 00:22:27.711 00:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:27.711 00:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:27.711 00:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:27.711 00:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:27.711 00:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:27.711 00:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:27.711 00:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:27.711 00:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.711 00:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.711 00:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:27.711 00:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:27.711 00:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:27.711 00:06:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- 
# trap - SIGINT SIGTERM EXIT 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 81904 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:27.711 rmmod nvme_tcp 00:22:27.711 rmmod nvme_fabrics 00:22:27.711 rmmod nvme_keyring 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 81873 ']' 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 81873 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 81873 ']' 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 81873 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81873 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:27.711 killing process with pid 81873 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81873' 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 81873 00:22:27.711 00:06:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 81873 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:22:28.647 00:22:28.647 real 0m10.996s 00:22:28.647 user 0m20.806s 00:22:28.647 sys 0m2.021s 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:28.647 ************************************ 00:22:28.647 END TEST nvmf_host_discovery 00:22:28.647 ************************************ 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.647 ************************************ 00:22:28.647 START TEST nvmf_host_multipath_status 00:22:28.647 ************************************ 00:22:28.647 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:28.907 * Looking for test storage... 00:22:28.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:28.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.907 --rc genhtml_branch_coverage=1 00:22:28.907 --rc genhtml_function_coverage=1 00:22:28.907 --rc genhtml_legend=1 00:22:28.907 --rc geninfo_all_blocks=1 00:22:28.907 --rc geninfo_unexecuted_blocks=1 00:22:28.907 00:22:28.907 ' 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:28.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.907 --rc genhtml_branch_coverage=1 00:22:28.907 --rc genhtml_function_coverage=1 00:22:28.907 --rc genhtml_legend=1 00:22:28.907 --rc geninfo_all_blocks=1 00:22:28.907 --rc geninfo_unexecuted_blocks=1 00:22:28.907 00:22:28.907 ' 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:28.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.907 --rc genhtml_branch_coverage=1 00:22:28.907 --rc genhtml_function_coverage=1 00:22:28.907 --rc genhtml_legend=1 00:22:28.907 --rc geninfo_all_blocks=1 00:22:28.907 --rc geninfo_unexecuted_blocks=1 00:22:28.907 00:22:28.907 ' 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:28.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.907 --rc genhtml_branch_coverage=1 00:22:28.907 --rc genhtml_function_coverage=1 00:22:28.907 --rc genhtml_legend=1 00:22:28.907 --rc geninfo_all_blocks=1 00:22:28.907 --rc geninfo_unexecuted_blocks=1 00:22:28.907 00:22:28.907 ' 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:28.907 00:06:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.907 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:28.908 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:28.908 Cannot find device "nvmf_init_br" 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:28.908 Cannot find device "nvmf_init_br2" 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:28.908 Cannot find device "nvmf_tgt_br" 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:28.908 Cannot find device "nvmf_tgt_br2" 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:28.908 Cannot find device "nvmf_init_br" 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:28.908 Cannot find device "nvmf_init_br2" 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:22:28.908 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:29.167 Cannot find device "nvmf_tgt_br" 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:29.167 Cannot find device "nvmf_tgt_br2" 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:29.167 Cannot find device "nvmf_br" 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:22:29.167 Cannot find device "nvmf_init_if" 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:29.167 Cannot find device "nvmf_init_if2" 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:29.167 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:29.167 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:29.167 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:29.426 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:29.426 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:22:29.426 00:22:29.426 --- 10.0.0.3 ping statistics --- 00:22:29.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.426 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:29.426 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:29.426 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:22:29.426 00:22:29.426 --- 10.0.0.4 ping statistics --- 00:22:29.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.426 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:29.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:29.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:22:29.426 00:22:29.426 --- 10.0.0.1 ping statistics --- 00:22:29.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.426 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:29.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:22:29.426 00:22:29.426 --- 10.0.0.2 ping statistics --- 00:22:29.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.426 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=82417 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 82417 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 82417 ']' 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
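Condensed from the trace above, the target bring-up reduces to roughly the following sequence. This is a sketch, not the verbatim helpers: the readiness loop stands in for autotest_common.sh's waitforlisten (rpc_get_methods is just a cheap RPC to probe with), while the netns name, binary path, and core mask are the values this particular run used.

  # Start the NVMe-oF target inside the test namespace; -m 0x3 pins it to two cores.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # Poll the JSON-RPC socket until the app answers; rpc.py fails while it is still booting.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
      >/dev/null 2>&1; do
      sleep 0.5
  done

Only once that socket answers does the script move on to the transport, subsystem, and listener configuration that the rpc.py calls below perform.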
00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.426 00:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:29.426 [2024-11-19 00:06:36.055358] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:29.426 [2024-11-19 00:06:36.055548] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.685 [2024-11-19 00:06:36.236254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:29.685 [2024-11-19 00:06:36.316067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.685 [2024-11-19 00:06:36.316127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.685 [2024-11-19 00:06:36.316159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.685 [2024-11-19 00:06:36.316181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.685 [2024-11-19 00:06:36.316194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.685 [2024-11-19 00:06:36.317864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.685 [2024-11-19 00:06:36.317885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.944 [2024-11-19 00:06:36.467490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:30.523 00:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.523 00:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:30.523 00:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:30.523 00:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:30.523 00:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:30.523 00:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.523 00:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=82417 00:22:30.523 00:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:30.782 [2024-11-19 00:06:37.251577] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.782 00:06:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:31.041 Malloc0 00:22:31.041 00:06:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:31.300 00:06:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:31.559 00:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:31.818 [2024-11-19 00:06:38.264885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:31.818 00:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:32.077 [2024-11-19 00:06:38.540957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:32.077 00:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=82468 00:22:32.078 00:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:32.078 00:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:32.078 00:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 82468 /var/tmp/bdevperf.sock 00:22:32.078 00:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 82468 ']' 00:22:32.078 00:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.078 00:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:32.078 00:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
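The multipath leg the trace now enters reduces to attaching the same subsystem through both listeners and then probing path state with jq. A condensed sketch built from the exact commands in this run; only the line breaks and the $rpc shorthand are added.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Two attach calls with the same -b Nvme0 and -x multipath build one bdev with two paths.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  # Each port_status check is one jq filter over bdev_nvme_get_io_paths,
  # e.g. "is the 4420 path the current one?":
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'

The repeated check_status blocks that follow are this same query varied over trsvcid 4420/4421 and the current/connected/accessible fields, compared against the expectations set by each set_ANA_state call.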
00:22:32.078 00:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.078 00:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:33.014 00:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.014 00:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:33.014 00:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:33.272 00:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:33.531 Nvme0n1 00:22:33.531 00:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:33.789 Nvme0n1 00:22:33.789 00:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:33.789 00:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:35.728 00:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:35.729 00:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:22:35.987 00:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:36.556 00:06:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:37.494 00:06:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:37.494 00:06:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:37.494 00:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.494 00:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:37.754 00:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.754 00:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:37.754 00:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.754 00:06:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:38.013 00:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:38.013 00:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:38.013 00:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.013 00:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:38.272 00:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.272 00:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:38.272 00:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.272 00:06:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:38.531 00:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.531 00:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:38.531 00:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.531 00:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:38.790 00:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.790 00:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:38.790 00:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.790 00:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:39.050 00:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.050 00:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:39.050 00:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:39.309 00:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:39.569 00:06:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:40.506 00:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:40.506 00:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:40.506 00:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:40.506 00:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:40.765 00:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:40.765 00:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:40.765 00:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:40.765 00:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.024 00:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.024 00:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:41.024 00:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:41.024 00:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.284 00:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.284 00:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:41.284 00:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.284 00:06:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:41.543 00:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.543 00:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:41.543 00:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.543 00:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:41.802 00:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.802 00:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:41.802 00:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.802 00:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:42.061 00:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:42.061 00:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:42.061 00:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:42.320 00:06:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:22:42.580 00:06:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:43.518 00:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:43.518 00:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:43.518 00:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.518 00:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:44.087 00:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.087 00:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:44.087 00:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:44.087 00:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.087 00:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:44.087 00:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:44.087 00:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.087 00:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:44.347 00:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.347 00:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:22:44.347 00:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.347 00:06:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:44.607 00:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.607 00:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:44.607 00:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:44.607 00:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.866 00:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.866 00:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:44.866 00:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.866 00:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:45.126 00:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.126 00:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:45.126 00:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:45.385 00:06:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:45.643 00:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:46.580 00:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:46.580 00:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:46.580 00:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.580 00:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:46.839 00:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.839 00:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:46.839 00:06:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.839 00:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:47.407 00:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:47.407 00:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:47.407 00:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.407 00:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:47.407 00:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.407 00:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:47.667 00:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.667 00:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:47.926 00:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:47.926 00:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:47.926 00:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.926 00:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:48.185 00:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.186 00:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:48.186 00:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.186 00:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:48.445 00:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:48.445 00:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:48.445 00:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:48.704 00:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:48.964 00:06:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:49.901 00:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:49.901 00:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:49.901 00:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.901 00:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:50.161 00:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:50.161 00:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:50.161 00:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.161 00:06:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:50.421 00:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:50.421 00:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:50.421 00:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.421 00:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:50.680 00:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.680 00:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:50.680 00:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:50.680 00:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:50.940 00:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.940 00:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:50.940 00:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:50.940 00:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:22:51.200 00:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:51.200 00:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:51.200 00:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:51.200 00:06:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.459 00:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:51.459 00:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:51.459 00:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:51.718 00:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:51.977 00:06:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:52.914 00:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:52.914 00:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:52.914 00:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.914 00:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:53.172 00:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:53.172 00:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:53.172 00:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.172 00:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:53.431 00:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:53.431 00:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:53.431 00:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.431 00:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:53.690 00:07:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:53.690 00:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:53.690 00:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:53.690 00:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:53.949 00:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:53.949 00:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:53.949 00:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:53.949 00:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.209 00:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:54.209 00:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:54.209 00:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.209 00:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:54.468 00:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.468 00:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:54.727 00:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:54.727 00:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:22:54.986 00:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:55.245 00:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:56.182 00:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:56.182 00:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:56.182 00:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:56.182 00:07:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:56.442 00:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.442 00:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:56.442 00:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:56.443 00:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:56.702 00:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.702 00:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:56.702 00:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:56.702 00:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:56.961 00:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.961 00:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:56.961 00:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:56.961 00:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:57.221 00:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.221 00:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:57.221 00:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.221 00:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:57.480 00:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.480 00:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:57.480 00:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.480 00:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:57.740 00:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.740 00:07:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:57.740 00:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:57.999 00:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:58.259 00:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:59.196 00:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:59.196 00:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:59.196 00:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.196 00:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:59.764 00:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:59.764 00:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:59.764 00:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.764 00:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:59.764 00:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.764 00:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:59.764 00:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.764 00:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:00.036 00:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.036 00:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:00.036 00:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.036 00:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:00.308 00:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.308 00:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:00.308 00:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.308 00:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:00.567 00:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.567 00:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:00.567 00:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.567 00:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:00.825 00:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.825 00:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:00.825 00:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:01.084 00:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:23:01.342 00:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:02.275 00:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:02.275 00:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:02.275 00:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.275 00:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:02.843 00:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.843 00:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:02.843 00:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.843 00:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:02.843 00:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.843 00:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:23:02.843 00:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:02.843 00:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.103 00:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.103 00:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:03.103 00:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:03.103 00:07:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.362 00:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.362 00:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:03.621 00:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.621 00:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:03.621 00:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:03.621 00:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:03.621 00:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.621 00:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:04.189 00:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.189 00:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:04.189 00:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:04.189 00:07:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:04.449 00:07:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:05.829 00:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:05.829 00:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:05.829 00:07:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.829 00:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:05.829 00:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.829 00:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:05.829 00:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.829 00:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:06.089 00:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:06.089 00:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:06.089 00:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.089 00:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:06.347 00:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.348 00:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:06.348 00:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.348 00:07:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:06.606 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.606 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:06.606 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.606 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:06.865 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.865 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:06.865 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.865 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:23:07.125 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:07.125 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 82468 00:23:07.125 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 82468 ']' 00:23:07.125 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 82468 00:23:07.125 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:07.125 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.125 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82468 00:23:07.125 killing process with pid 82468 00:23:07.125 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:07.125 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:07.125 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82468' 00:23:07.125 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 82468 00:23:07.125 00:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 82468 00:23:07.125 { 00:23:07.125 "results": [ 00:23:07.125 { 00:23:07.125 "job": "Nvme0n1", 00:23:07.125 "core_mask": "0x4", 00:23:07.125 "workload": "verify", 00:23:07.125 "status": "terminated", 00:23:07.125 "verify_range": { 00:23:07.125 "start": 0, 00:23:07.125 "length": 16384 00:23:07.125 }, 00:23:07.125 "queue_depth": 128, 00:23:07.125 "io_size": 4096, 00:23:07.125 "runtime": 33.278392, 00:23:07.125 "iops": 7910.478366863399, 00:23:07.125 "mibps": 30.900306120560153, 00:23:07.125 "io_failed": 0, 00:23:07.125 "io_timeout": 0, 00:23:07.125 "avg_latency_us": 16149.490898924209, 00:23:07.125 "min_latency_us": 1303.2727272727273, 00:23:07.125 "max_latency_us": 4026531.84 00:23:07.125 } 00:23:07.125 ], 00:23:07.125 "core_count": 1 00:23:07.125 } 00:23:08.066 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 82468 00:23:08.066 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:08.066 [2024-11-19 00:06:38.644076] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:23:08.066 [2024-11-19 00:06:38.644244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82468 ] 00:23:08.066 [2024-11-19 00:06:38.807255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.066 [2024-11-19 00:06:38.892369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.067 [2024-11-19 00:06:39.048103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:08.067 Running I/O for 90 seconds... 
00:23:08.067 7206.00 IOPS, 28.15 MiB/s [2024-11-19T00:07:14.759Z] 7861.00 IOPS, 30.71 MiB/s [2024-11-19T00:07:14.759Z] 8046.00 IOPS, 31.43 MiB/s [2024-11-19T00:07:14.759Z] 8126.50 IOPS, 31.74 MiB/s [2024-11-19T00:07:14.759Z] 8162.00 IOPS, 31.88 MiB/s [2024-11-19T00:07:14.759Z] 8200.67 IOPS, 32.03 MiB/s [2024-11-19T00:07:14.759Z] 8237.86 IOPS, 32.18 MiB/s [2024-11-19T00:07:14.759Z] 8239.62 IOPS, 32.19 MiB/s [2024-11-19T00:07:14.759Z] 8255.33 IOPS, 32.25 MiB/s [2024-11-19T00:07:14.759Z] 8273.70 IOPS, 32.32 MiB/s [2024-11-19T00:07:14.759Z] 8274.91 IOPS, 32.32 MiB/s [2024-11-19T00:07:14.759Z] 8276.75 IOPS, 32.33 MiB/s [2024-11-19T00:07:14.759Z] 8293.62 IOPS, 32.40 MiB/s [2024-11-19T00:07:14.759Z] 8288.29 IOPS, 32.38 MiB/s [2024-11-19T00:07:14.759Z] [2024-11-19 00:06:55.214357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.067 [2024-11-19 00:06:55.214463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.214540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.067 [2024-11-19 00:06:55.214567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.214597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.067 [2024-11-19 00:06:55.214631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.214661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.067 [2024-11-19 00:06:55.214680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.214707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.067 [2024-11-19 00:06:55.214726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.214753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.067 [2024-11-19 00:06:55.214772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.214798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.067 [2024-11-19 00:06:55.214817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.214843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.067 [2024-11-19 00:06:55.214862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.214888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.067 [2024-11-19 00:06:55.214908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.214956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.067 [2024-11-19 00:06:55.214976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.067 [2024-11-19 00:06:55.215022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.067 [2024-11-19 00:06:55.215067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.067 [2024-11-19 00:06:55.215112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.067 [2024-11-19 00:06:55.215158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.067 [2024-11-19 00:06:55.215204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.067 [2024-11-19 00:06:55.215250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:60608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.067 [2024-11-19 00:06:55.215295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.067 [2024-11-19 00:06:55.215343] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.067 [2024-11-19 00:06:55.215388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.067 [2024-11-19 00:06:55.215450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.067 [2024-11-19 00:06:55.215495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.067 [2024-11-19 00:06:55.215555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.067 [2024-11-19 00:06:55.215602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.067 [2024-11-19 00:06:55.215700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.067 [2024-11-19 00:06:55.215748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.067 [2024-11-19 00:06:55.215812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.067 [2024-11-19 00:06:55.215859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
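This stretch of the try.txt dump pairs nvme_io_qpair_print_command entries with spdk_nvme_print_completion entries: each in-flight READ/WRITE stamped around 00:06:55 completes with "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" — status code type 0x3 (path-related), status code 0x02 (ANA inaccessible) — which is consistent with the set_ANA_state steps traced earlier at the same test time, when the listeners were flipped to inaccessible. A minimal sketch (annotation only) for tallying those rejections, assuming the dump is available as the try.txt file cat'ed above:

# Annotation, not part of the run: count ANA-inaccessible completions in
# the dump, then split the rejected command prints by opcode.
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' try.txt
grep -oE '\*NOTICE\*: (READ|WRITE) sqid' try.txt | sort | uniq -c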
00:23:08.067 [2024-11-19 00:06:55.215908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.067 [2024-11-19 00:06:55.215971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.215998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.067 [2024-11-19 00:06:55.216032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.216059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.067 [2024-11-19 00:06:55.216078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.216128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.067 [2024-11-19 00:06:55.216148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:08.067 [2024-11-19 00:06:55.216197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.067 [2024-11-19 00:06:55.216223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.216293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.216328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.216360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.216381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.216409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.216430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.216457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.216478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.216505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.216525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.216559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.216595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.216635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.216669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.216701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.216721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.216747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.216766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.216792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.216812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.216837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.216858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.216884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.216903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.216928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.216956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.216985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.217005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.068 [2024-11-19 00:06:55.217051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.068 [2024-11-19 00:06:55.217097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.068 [2024-11-19 00:06:55.217145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.068 [2024-11-19 00:06:55.217190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.068 [2024-11-19 00:06:55.217235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.068 [2024-11-19 00:06:55.217280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.068 [2024-11-19 00:06:55.217326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.068 [2024-11-19 00:06:55.217372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.217417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.217462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
00:23:08.068 [2024-11-19 00:06:55.217488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.217507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.217563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.217623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.217671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.217716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.217761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.217807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.217852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.217899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.217944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:36 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.217969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.217988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.218015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.218034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.218060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.068 [2024-11-19 00:06:55.218080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:08.068 [2024-11-19 00:06:55.218116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.069 [2024-11-19 00:06:55.218136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.218162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.069 [2024-11-19 00:06:55.218181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.218207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.218226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.218253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.218272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.218298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.218317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.218343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.218363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.218388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.218408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.218433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.218452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.218478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.218497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.218523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.218543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.218579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.069 [2024-11-19 00:06:55.218614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.218660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.069 [2024-11-19 00:06:55.218682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.218709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.069 [2024-11-19 00:06:55.218737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.218766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.069 [2024-11-19 00:06:55.218787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.218813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.069 [2024-11-19 00:06:55.218833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.218860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.069 [2024-11-19 00:06:55.218880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.218907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:08.069 [2024-11-19 00:06:55.218926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.218952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.069 [2024-11-19 00:06:55.218972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.218999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.219018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.219044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.219064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.219090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.219110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.219136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.219156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.219182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.219202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.219228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.219248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.219274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.219301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.219345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.219366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.219397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 
lba:60896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.219418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.219445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.219465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.219491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.219512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.219538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.219558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.219584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.219604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.219663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.219685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.219713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.219733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.220722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.069 [2024-11-19 00:06:55.220762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.220807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.069 [2024-11-19 00:06:55.220830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.220865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.069 [2024-11-19 00:06:55.220886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.220920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.069 [2024-11-19 00:06:55.220941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.220990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.069 [2024-11-19 00:06:55.221013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:08.069 [2024-11-19 00:06:55.221060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.070 [2024-11-19 00:06:55.221081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:08.070 [2024-11-19 00:06:55.221114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.070 [2024-11-19 00:06:55.221134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:08.070 [2024-11-19 00:06:55.221168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.070 [2024-11-19 00:06:55.221188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:08.070 [2024-11-19 00:06:55.221238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.070 [2024-11-19 00:06:55.221263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:08.070 [2024-11-19 00:06:55.221301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.070 [2024-11-19 00:06:55.221322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:08.070 [2024-11-19 00:06:55.221355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.070 [2024-11-19 00:06:55.221375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:08.070 [2024-11-19 00:06:55.221408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.070 [2024-11-19 00:06:55.221428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:08.070 [2024-11-19 00:06:55.221477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.070 [2024-11-19 00:06:55.221498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:23:08.070 [2024-11-19 00:06:55.221531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.070 [2024-11-19 00:06:55.221551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:08.070 [2024-11-19 00:06:55.221585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.070 [2024-11-19 00:06:55.221605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:08.070 [2024-11-19 00:06:55.221656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.070 [2024-11-19 00:06:55.221681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:08.070 [2024-11-19 00:06:55.221745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.070 [2024-11-19 00:06:55.221772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:08.070 [2024-11-19 00:06:55.221807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.070 [2024-11-19 00:06:55.221829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:08.070 [2024-11-19 00:06:55.221862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.070 [2024-11-19 00:06:55.221882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:08.070 [2024-11-19 00:06:55.221916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.070 [2024-11-19 00:06:55.221937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:08.070 [2024-11-19 00:06:55.221970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.070 [2024-11-19 00:06:55.221991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:08.070 [2024-11-19 00:06:55.222025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.070 [2024-11-19 00:06:55.222045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:08.070 [2024-11-19 00:06:55.222079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.070 [2024-11-19 00:06:55.222099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
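The "(03/02)" printed with every completion above is the NVMe status-code-type / status-code pair: SCT 0x3 is Path Related Status, and SC 0x02 under it is Asymmetric Access Inaccessible, the ANA state this multipath test appears to toggle the active path into. A minimal bash sketch (a hypothetical helper, not part of the SPDK tree) that maps the pair to its spec name:

    # Decode the "(SCT/SC)" pair that spdk_nvme_print_completion() prints,
    # e.g. "(03/02)" above. Names follow the NVMe base specification.
    decode_nvme_status() {
      local sct=$((16#$1)) sc=$((16#$2))   # hex fields as printed in the log
      if [ "$sct" -ne 3 ]; then
        echo "SCT 0x$1: not a path-related status (SC 0x$2)"
        return
      fi
      case "$sc" in
        0) echo "INTERNAL PATH ERROR (03/00)" ;;
        1) echo "ASYMMETRIC ACCESS PERSISTENT LOSS (03/01)" ;;
        2) echo "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" ;;
        3) echo "ASYMMETRIC ACCESS TRANSITION (03/03)" ;;
        *) echo "path-related status, SC 0x$2" ;;
      esac
    }

    decode_nvme_status 03 02   # prints: ASYMMETRIC ACCESS INACCESSIBLE (03/02)

Since the status is path-related rather than a media or command error, completions like these are retry material for the host's multipath layer, not data-integrity failures.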
00:23:08.070 8150.67 IOPS, 31.84 MiB/s [2024-11-19T00:07:14.762Z] 7641.25 IOPS, 29.85 MiB/s [2024-11-19T00:07:14.762Z] 7191.76 IOPS, 28.09 MiB/s [2024-11-19T00:07:14.762Z] 6792.22 IOPS, 26.53 MiB/s [2024-11-19T00:07:14.762Z] 6547.63 IOPS, 25.58 MiB/s [2024-11-19T00:07:14.762Z] 6631.45 IOPS, 25.90 MiB/s [2024-11-19T00:07:14.762Z] 6713.52 IOPS, 26.22 MiB/s [2024-11-19T00:07:14.762Z] 6938.68 IOPS, 27.10 MiB/s [2024-11-19T00:07:14.762Z] 7144.26 IOPS, 27.91 MiB/s [2024-11-19T00:07:14.762Z] 7321.42 IOPS, 28.60 MiB/s [2024-11-19T00:07:14.762Z] 7369.68 IOPS, 28.79 MiB/s [2024-11-19T00:07:14.762Z] 7407.46 IOPS, 28.94 MiB/s [2024-11-19T00:07:14.762Z] 7435.04 IOPS, 29.04 MiB/s [2024-11-19T00:07:14.762Z] 7534.11 IOPS, 29.43 MiB/s [2024-11-19T00:07:14.762Z] 7677.31 IOPS, 29.99 MiB/s [2024-11-19T00:07:14.762Z] 7807.30 IOPS, 30.50 MiB/s [2024-11-19T00:07:14.762Z]
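The throughput samples above sag while the path is inaccessible and recover afterwards. The two columns are redundant: each command above is len:8 blocks of 512 bytes, i.e. 4096-byte IOs, so MiB/s is just IOPS / 256. A throwaway sanity-check sketch, with sample values hard-coded from the line above:

    # 4096 bytes per IO and 1048576 bytes per MiB, so MiB/s = IOPS / 256.
    for iops in 8150.67 6547.63 7807.30; do
      awk -v i="$iops" 'BEGIN { printf "%8.2f IOPS -> %5.2f MiB/s\n", i, i * 4096 / 1048576 }'
    done
    # prints 31.84, 25.58 and 30.50 MiB/s, matching the sampled columns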
[2024-11-19 00:07:11.056794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:08.070 [2024-11-19 00:07:11.056885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
[... a second burst of ~79 further command/completion pairs at 00:07:11 elided: READs (lba 92368-93208, SGL TRANSPORT DATA BLOCK) and WRITEs (lba 92888-93544, SGL DATA BLOCK OFFSET, len:0x1000) on qid:1, again all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd wrapping from 0076 through 0044 ...]
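Bursts like the two above are easier to triage in aggregate than record by record. A hypothetical awk one-off over a saved copy of this console output (called build.log here, an assumption) that tallies the notice lines using the exact strings nvme_qpair.c prints:

    # Count command notices by opcode and completions carrying the ANA
    # INACCESSIBLE status, using the literal marker strings from the log.
    awk '
      /nvme_io_qpair_print_command/ && / READ /  { reads++ }
      /nvme_io_qpair_print_command/ && / WRITE / { writes++ }
      /spdk_nvme_print_completion/ && /ASYMMETRIC ACCESS INACCESSIBLE/ { inacc++ }
      END { printf "READs: %d  WRITEs: %d  ANA-inaccessible completions: %d\n", reads, writes, inacc }
    ' build.log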
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.072 [2024-11-19 00:07:11.063299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:08.072 [2024-11-19 00:07:11.063336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.072 [2024-11-19 00:07:11.063357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:08.072 [2024-11-19 00:07:11.063384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.073 [2024-11-19 00:07:11.063404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.073 [2024-11-19 00:07:11.063431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.073 [2024-11-19 00:07:11.063451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:08.073 [2024-11-19 00:07:11.063477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.073 [2024-11-19 00:07:11.063513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:08.073 [2024-11-19 00:07:11.063542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.073 [2024-11-19 00:07:11.063564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:08.073 7886.48 IOPS, 30.81 MiB/s [2024-11-19T00:07:14.765Z] 7902.50 IOPS, 30.87 MiB/s [2024-11-19T00:07:14.765Z] 7911.27 IOPS, 30.90 MiB/s [2024-11-19T00:07:14.765Z] Received shutdown signal, test time was about 33.279200 seconds 00:23:08.073 00:23:08.073 Latency(us) 00:23:08.073 [2024-11-19T00:07:14.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.073 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:08.073 Verification LBA range: start 0x0 length 0x4000 00:23:08.073 Nvme0n1 : 33.28 7910.48 30.90 0.00 0.00 16149.49 1303.27 4026531.84 00:23:08.073 [2024-11-19T00:07:14.765Z] =================================================================================================================== 00:23:08.073 [2024-11-19T00:07:14.765Z] Total : 7910.48 30.90 0.00 0.00 16149.49 1303.27 4026531.84 00:23:08.073 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@148 -- # nvmftestfini 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:08.333 rmmod nvme_tcp 00:23:08.333 rmmod nvme_fabrics 00:23:08.333 rmmod nvme_keyring 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 82417 ']' 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 82417 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 82417 ']' 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 82417 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82417 00:23:08.333 killing process with pid 82417 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82417' 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 82417 00:23:08.333 00:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 82417 00:23:09.271 00:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:09.271 00:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:09.271 00:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:09.271 00:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:23:09.271 00:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:23:09.271 00:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:23:09.271 00:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:09.271 00:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:09.271 00:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:09.271 00:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:09.271 00:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:09.531 00:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:09.531 00:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:09.531 00:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:09.531 00:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:09.531 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:09.531 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:09.531 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:09.531 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:09.531 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:09.531 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:09.531 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:09.531 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:09.531 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.531 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.531 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.531 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:23:09.531 00:23:09.531 real 0m40.888s 00:23:09.531 user 2m10.804s 00:23:09.531 sys 0m10.332s 00:23:09.531 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:09.531 00:07:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:09.531 ************************************ 00:23:09.531 END TEST nvmf_host_multipath_status 00:23:09.531 ************************************ 00:23:09.531 00:07:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:09.531 00:07:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:09.531 00:07:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:09.531 00:07:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.791 ************************************ 00:23:09.791 START TEST nvmf_discovery_remove_ifc 00:23:09.791 ************************************ 
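
The teardown that closed out the multipath test above runs in a fixed order: delete the subsystem over RPC while the target still answers, unload the host-side kernel modules, kill the target process, strip only the iptables rules tagged SPDK_NVMF, and unwind the veth/bridge topology before removing the network namespace. A minimal sketch of that sequence, assuming the same interface names as the trace (the $nvmfpid variable is illustrative; the harness resolves the pid itself):

    # detach the subsystem first so in-flight connections drain cleanly
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # unload the initiator-side kernel modules added during setup
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the target, then drop only the SPDK-tagged firewall rules
    kill "$nvmfpid"
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # detach every veth end from the bridge, then delete links and namespace
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster && ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk
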
00:23:09.791 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:09.791 * Looking for test storage... 00:23:09.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:09.791 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:09.791 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:23:09.791 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:09.791 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:09.791 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:09.791 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:09.791 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:09.791 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:09.791 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:09.791 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:09.791 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:09.791 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:09.791 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:09.791 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:09.791 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:09.791 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:09.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.792 --rc genhtml_branch_coverage=1 00:23:09.792 --rc genhtml_function_coverage=1 00:23:09.792 --rc genhtml_legend=1 00:23:09.792 --rc geninfo_all_blocks=1 00:23:09.792 --rc geninfo_unexecuted_blocks=1 00:23:09.792 00:23:09.792 ' 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:09.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.792 --rc genhtml_branch_coverage=1 00:23:09.792 --rc genhtml_function_coverage=1 00:23:09.792 --rc genhtml_legend=1 00:23:09.792 --rc geninfo_all_blocks=1 00:23:09.792 --rc geninfo_unexecuted_blocks=1 00:23:09.792 00:23:09.792 ' 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:09.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.792 --rc genhtml_branch_coverage=1 00:23:09.792 --rc genhtml_function_coverage=1 00:23:09.792 --rc genhtml_legend=1 00:23:09.792 --rc geninfo_all_blocks=1 00:23:09.792 --rc geninfo_unexecuted_blocks=1 00:23:09.792 00:23:09.792 ' 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:09.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:09.792 --rc genhtml_branch_coverage=1 00:23:09.792 --rc genhtml_function_coverage=1 00:23:09.792 --rc genhtml_legend=1 00:23:09.792 --rc geninfo_all_blocks=1 00:23:09.792 --rc geninfo_unexecuted_blocks=1 00:23:09.792 00:23:09.792 ' 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:09.792 00:07:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.792 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:09.793 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:09.793 00:07:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:09.793 Cannot find device "nvmf_init_br" 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:09.793 Cannot find device "nvmf_init_br2" 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:23:09.793 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:10.052 Cannot find device "nvmf_tgt_br" 00:23:10.052 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:23:10.052 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:10.052 Cannot find device "nvmf_tgt_br2" 00:23:10.052 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:23:10.052 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:10.052 Cannot find device "nvmf_init_br" 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:10.053 Cannot find device "nvmf_init_br2" 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:10.053 Cannot find device "nvmf_tgt_br" 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:10.053 Cannot find device "nvmf_tgt_br2" 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:10.053 Cannot find device "nvmf_br" 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:10.053 Cannot find device "nvmf_init_if" 00:23:10.053 00:07:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:10.053 Cannot find device "nvmf_init_if2" 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:10.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:10.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:10.053 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:10.053 00:07:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:10.313 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:10.313 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:23:10.313 00:23:10.313 --- 10.0.0.3 ping statistics --- 00:23:10.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.313 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:10.313 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:10.313 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:23:10.313 00:23:10.313 --- 10.0.0.4 ping statistics --- 00:23:10.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.313 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:10.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:10.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:23:10.313 00:23:10.313 --- 10.0.0.1 ping statistics --- 00:23:10.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.313 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:10.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:23:10.313 00:23:10.313 --- 10.0.0.2 ping statistics --- 00:23:10.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.313 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=83315 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 83315 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 83315 ']' 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:10.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
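
Everything in the trace above builds toward one thing: an nvmf_tgt reactor running inside the nvmf_tgt_ns_spdk namespace, reachable from the host ends of the veth pairs at 10.0.0.3/10.0.0.4 (verified by the four pings) and answering RPC on /var/tmp/spdk.sock. The "Waiting for process..." message corresponds to a poll on that socket; a minimal sketch of the pattern, assuming rpc.py from the same repo (the retry count and sleep interval here are illustrative, not the harness's exact values):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # the app is "listening" once its UNIX-domain RPC socket answers a no-op query
    for _ in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
                rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done
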
00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.313 00:07:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:10.572 [2024-11-19 00:07:17.013334] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:23:10.572 [2024-11-19 00:07:17.013514] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.572 [2024-11-19 00:07:17.203432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.832 [2024-11-19 00:07:17.327812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.832 [2024-11-19 00:07:17.327889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.832 [2024-11-19 00:07:17.327914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.832 [2024-11-19 00:07:17.327947] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.832 [2024-11-19 00:07:17.327967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.832 [2024-11-19 00:07:17.329402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.092 [2024-11-19 00:07:17.522771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:11.351 00:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.351 00:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:11.351 00:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:11.351 00:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.351 00:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:11.351 00:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.351 00:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:11.351 00:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.351 00:07:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:11.351 [2024-11-19 00:07:17.971646] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.351 [2024-11-19 00:07:17.979834] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:11.351 null0 00:23:11.351 [2024-11-19 00:07:18.011726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:11.351 00:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.351 00:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=83347 00:23:11.351 00:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:11.351 00:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83347 /tmp/host.sock 00:23:11.351 00:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 83347 ']' 00:23:11.351 00:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:11.351 00:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.351 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:11.351 00:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:11.351 00:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.351 00:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:11.611 [2024-11-19 00:07:18.154510] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:23:11.611 [2024-11-19 00:07:18.154703] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83347 ] 00:23:11.870 [2024-11-19 00:07:18.339918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.870 [2024-11-19 00:07:18.461915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.438 00:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.438 00:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:12.438 00:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:12.438 00:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:12.438 00:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.438 00:07:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:12.438 00:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.438 00:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:12.438 00:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.438 00:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:12.696 [2024-11-19 00:07:19.155252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:12.696 00:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.696 00:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:12.696 00:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.696 00:07:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:13.631 [2024-11-19 00:07:20.255976] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:13.631 [2024-11-19 00:07:20.256033] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:13.631 [2024-11-19 00:07:20.256076] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:13.631 [2024-11-19 00:07:20.262034] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:23:13.890 [2024-11-19 00:07:20.324695] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:23:13.890 [2024-11-19 00:07:20.326050] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:23:13.890 [2024-11-19 00:07:20.327985] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:13.891 [2024-11-19 00:07:20.328063] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:13.891 [2024-11-19 00:07:20.328123] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:13.891 [2024-11-19 00:07:20.328149] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:23:13.891 [2024-11-19 00:07:20.328189] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:13.891 [2024-11-19 00:07:20.335236] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
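
The discovery sequence in the trace is entirely RPC-driven: bdev_nvme_start_discovery connects the host app to the discovery subsystem on 10.0.0.3:8009, the returned log page advertises nqn.2016-06.io.spdk:cnode0 on port 4420, and --wait-for-attach blocks until the controller, and hence bdev nvme0n1, is created. The harness then reads the bdev list back through the same pipeline each wait loop below uses. A condensed sketch of the two calls, with flags copied from the trace:

    # start discovery and auto-attach anything the log page reports
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
    # list attached bdevs; expected output at this point: nvme0n1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
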
00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:13.891 00:07:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:14.826 00:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:14.826 00:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.826 00:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.826 00:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:14.826 00:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:14.826 00:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:14.826 00:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:14.826 00:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.085 00:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:15.085 00:07:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:16.022 00:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:16.022 00:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.022 00:07:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.022 00:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:16.022 00:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:16.022 00:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:16.022 00:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:16.022 00:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.022 00:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:16.022 00:07:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:16.959 00:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:16.959 00:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.959 00:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.959 00:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:16.959 00:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:16.959 00:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:16.959 00:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:16.959 00:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.959 00:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:16.959 00:07:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:18.336 00:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:18.336 00:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:18.336 00:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.336 00:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:18.336 00:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:18.336 00:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:18.336 00:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:18.336 00:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.336 00:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:18.336 00:07:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:19.273 00:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:19.273 00:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.273 00:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:19.273 00:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:19.273 00:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.273 00:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:19.273 00:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:19.273 00:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.273 [2024-11-19 00:07:25.755541] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:19.273 [2024-11-19 00:07:25.755673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.273 [2024-11-19 00:07:25.755697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-11-19 00:07:25.755714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.273 [2024-11-19 00:07:25.755727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-11-19 00:07:25.755740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.273 [2024-11-19 00:07:25.755752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-11-19 00:07:25.755765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.273 [2024-11-19 00:07:25.755776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-11-19 00:07:25.755794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.273 [2024-11-19 00:07:25.755816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.273 [2024-11-19 00:07:25.755835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:23:19.273 00:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:19.273 00:07:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:19.273 [2024-11-19 00:07:25.765534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:23:19.274 [2024-11-19 00:07:25.775547] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
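For orientation: the fault driving this reset sequence is nothing more than the target-side interface being torn down inside the nvmf_tgt_ns_spdk namespace, and the recovery later in the log is its mirror image. The commands below are exactly as traced at sh@75-76 and sh@82-83 of host/discovery_remove_ifc.sh; while the address is gone, every reconnect attempt times out with errno 110 (ETIMEDOUT), which is what the surrounding entries report.

    # inject the fault (sh@75-76): drop the target address and down the link
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

    # restore the path (sh@82-83): discovery then re-creates the namespace as nvme1n1
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up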
00:23:19.274 [2024-11-19 00:07:25.775637] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:19.274 [2024-11-19 00:07:25.775650] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:19.274 [2024-11-19 00:07:25.775667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:19.274 [2024-11-19 00:07:25.775743] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:20.212 00:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:20.212 00:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.212 00:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.212 00:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:20.212 00:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:20.212 00:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:20.212 00:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:20.212 [2024-11-19 00:07:26.787714] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:23:20.212 [2024-11-19 00:07:26.788017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:23:20.212 [2024-11-19 00:07:26.788309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:23:20.212 [2024-11-19 00:07:26.788731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:23:20.212 [2024-11-19 00:07:26.790131] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:23:20.212 [2024-11-19 00:07:26.790242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:20.212 [2024-11-19 00:07:26.790275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:20.212 [2024-11-19 00:07:26.790310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:20.212 [2024-11-19 00:07:26.790334] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:20.212 [2024-11-19 00:07:26.790353] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:20.212 [2024-11-19 00:07:26.790367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:20.212 [2024-11-19 00:07:26.790390] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
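The sh@29/sh@33/sh@34 entries that bracket these resets are iterations of a one-second poll. From the xtrace alone, the helpers in host/discovery_remove_ifc.sh behave roughly like the sketch below; rpc_cmd is the test suite's RPC wrapper, and the real script may add a timeout guard that this sketch omits.

    get_bdev_list() {
        # list bdev names via the host app's RPC socket, normalized to one sorted line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # poll once per second until the bdev list matches the expected value,
        # e.g. wait_for_bdev '' blocks until nvme0n1 is gone
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }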
00:23:20.212 [2024-11-19 00:07:26.790412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:20.212 00:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.212 00:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:20.212 00:07:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:21.149 [2024-11-19 00:07:27.790512] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:21.149 [2024-11-19 00:07:27.790575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:21.149 [2024-11-19 00:07:27.790626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:21.149 [2024-11-19 00:07:27.790641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:21.149 [2024-11-19 00:07:27.790654] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:23:21.149 [2024-11-19 00:07:27.790667] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:21.149 [2024-11-19 00:07:27.790676] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:21.149 [2024-11-19 00:07:27.790683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:21.149 [2024-11-19 00:07:27.790734] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:23:21.149 [2024-11-19 00:07:27.790784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.149 [2024-11-19 00:07:27.790804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-11-19 00:07:27.790837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.149 [2024-11-19 00:07:27.790876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-11-19 00:07:27.790899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.149 [2024-11-19 00:07:27.790912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-11-19 00:07:27.790925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.149 [2024-11-19 00:07:27.790936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-11-19 00:07:27.790949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:21.149 [2024-11-19 00:07:27.790960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.149 [2024-11-19 00:07:27.790988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:23:21.149 [2024-11-19 00:07:27.791016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:23:21.149 [2024-11-19 00:07:27.791611] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:21.149 [2024-11-19 00:07:27.791654] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:23:21.149 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:21.149 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.149 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:21.149 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:21.149 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.149 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:21.149 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:21.408 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.408 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:21.408 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:21.408 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:21.408 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:21.408 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:21.408 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.408 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.408 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:21.408 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:21.408 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:21.408 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:21.408 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.408 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:21.408 00:07:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:22.346 00:07:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:22.346 00:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.346 00:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:22.346 00:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.346 00:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:22.346 00:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:22.346 00:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:22.346 00:07:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.346 00:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:22.346 00:07:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:23.283 [2024-11-19 00:07:29.800816] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:23.283 [2024-11-19 00:07:29.800865] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:23.283 [2024-11-19 00:07:29.800896] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:23.283 [2024-11-19 00:07:29.806880] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:23:23.283 [2024-11-19 00:07:29.861484] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:23:23.283 [2024-11-19 00:07:29.862733] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x61500002c180:1 started. 00:23:23.283 [2024-11-19 00:07:29.864821] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:23.283 [2024-11-19 00:07:29.864900] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:23.283 [2024-11-19 00:07:29.864959] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:23.283 [2024-11-19 00:07:29.864986] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:23:23.283 [2024-11-19 00:07:29.865001] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:23.283 [2024-11-19 00:07:29.869993] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x61500002c180 was disconnected and freed. delete nvme_qpair. 
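The Discovery[10.0.0.3:8009] entries above come from SPDK's host-side discovery poller. A session like this is normally opened with the bdev_nvme_start_discovery RPC; the one-liner below is a sketch under the assumption that the test used the standard options, since the script's exact flags are not visible in this excerpt.

    # open a discovery session against the target's discovery service on port 8009;
    # subsystems reported in the log page are attached as bdevs with the given prefix
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009

Because the controller to 10.0.0.3:4420 comes back under the same discovery session, the re-attached namespace surfaces as nvme1n1 rather than nvme0n1, which is what the wait_for_bdev nvme1n1 poll that follows is looking for.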
00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 83347 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 83347 ']' 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 83347 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83347 00:23:23.542 killing process with pid 83347 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83347' 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 83347 00:23:23.542 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 83347 00:23:24.480 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:24.480 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:24.480 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:23:24.480 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:24.480 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:23:24.480 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:24.480 00:07:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:24.480 rmmod nvme_tcp 00:23:24.480 rmmod nvme_fabrics 00:23:24.480 rmmod nvme_keyring 00:23:24.480 00:07:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:24.480 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:23:24.480 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:23:24.480 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 83315 ']' 00:23:24.480 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 83315 00:23:24.480 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 83315 ']' 00:23:24.480 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 83315 00:23:24.480 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:24.480 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.480 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83315 00:23:24.480 killing process with pid 83315 00:23:24.480 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:24.480 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:24.480 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83315' 00:23:24.480 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 83315 00:23:24.480 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 83315 00:23:25.474 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:25.474 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:25.474 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:25.474 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:23:25.474 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:23:25.474 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:25.474 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:23:25.474 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:25.474 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:25.474 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:25.474 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:25.474 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:25.474 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:25.474 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:25.474 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:25.474 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:25.474 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:25.474 00:07:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:25.475 00:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:25.475 00:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:25.475 00:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:25.475 00:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:25.475 00:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:25.475 00:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.475 00:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.475 00:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.475 00:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:23:25.475 00:23:25.475 real 0m15.898s 00:23:25.475 user 0m26.642s 00:23:25.475 sys 0m2.586s 00:23:25.475 00:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:25.475 ************************************ 00:23:25.475 END TEST nvmf_discovery_remove_ifc 00:23:25.475 00:07:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:25.475 ************************************ 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.763 ************************************ 00:23:25.763 START TEST nvmf_identify_kernel_target 00:23:25.763 ************************************ 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:25.763 * Looking for test storage... 
00:23:25.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:25.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.763 --rc genhtml_branch_coverage=1 00:23:25.763 --rc genhtml_function_coverage=1 00:23:25.763 --rc genhtml_legend=1 00:23:25.763 --rc geninfo_all_blocks=1 00:23:25.763 --rc geninfo_unexecuted_blocks=1 00:23:25.763 00:23:25.763 ' 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:25.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.763 --rc genhtml_branch_coverage=1 00:23:25.763 --rc genhtml_function_coverage=1 00:23:25.763 --rc genhtml_legend=1 00:23:25.763 --rc geninfo_all_blocks=1 00:23:25.763 --rc geninfo_unexecuted_blocks=1 00:23:25.763 00:23:25.763 ' 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:25.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.763 --rc genhtml_branch_coverage=1 00:23:25.763 --rc genhtml_function_coverage=1 00:23:25.763 --rc genhtml_legend=1 00:23:25.763 --rc geninfo_all_blocks=1 00:23:25.763 --rc geninfo_unexecuted_blocks=1 00:23:25.763 00:23:25.763 ' 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:25.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.763 --rc genhtml_branch_coverage=1 00:23:25.763 --rc genhtml_function_coverage=1 00:23:25.763 --rc genhtml_legend=1 00:23:25.763 --rc geninfo_all_blocks=1 00:23:25.763 --rc geninfo_unexecuted_blocks=1 00:23:25.763 00:23:25.763 ' 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
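The scripts/common.sh trace a few entries back (lt calling cmp_versions, splitting on IFS=.-: and comparing component by component) is the usual semantic-version test, here deciding that lcov 1.15 predates 2.x so the old coverage option names apply. Condensed to the one '<' case exercised above, it amounts to the sketch below; the real cmp_versions supports more operators than this.

    lt() {
        # true when version $1 sorts strictly before version $2, component-wise
        local -a a b
        local i
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            if ((${a[i]:-0} < ${b[i]:-0})); then return 0; fi   # first lower component decides
            if ((${a[i]:-0} > ${b[i]:-0})); then return 1; fi
        done
        return 1   # equal versions are not strictly less
    }

    lt 1.15 2 && echo 'use the pre-2.0 lcov option names'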
00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.763 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:25.764 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:25.764 00:07:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:25.764 00:07:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:25.764 Cannot find device "nvmf_init_br" 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:25.764 Cannot find device "nvmf_init_br2" 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:25.764 Cannot find device "nvmf_tgt_br" 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:23:25.764 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:26.024 Cannot find device "nvmf_tgt_br2" 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:26.024 Cannot find device "nvmf_init_br" 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:26.024 Cannot find device "nvmf_init_br2" 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:26.024 Cannot find device "nvmf_tgt_br" 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:26.024 Cannot find device "nvmf_tgt_br2" 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:26.024 Cannot find device "nvmf_br" 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:26.024 Cannot find device "nvmf_init_if" 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:26.024 Cannot find device "nvmf_init_if2" 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:26.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:26.024 00:07:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:26.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:26.024 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:26.284 00:07:32 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:26.284 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:26.284 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:23:26.284 00:23:26.284 --- 10.0.0.3 ping statistics --- 00:23:26.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.284 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:26.284 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:26.284 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:23:26.284 00:23:26.284 --- 10.0.0.4 ping statistics --- 00:23:26.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.284 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:26.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:26.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:23:26.284 00:23:26.284 --- 10.0.0.1 ping statistics --- 00:23:26.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.284 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:26.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:26.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:23:26.284 00:23:26.284 --- 10.0.0.2 ping statistics --- 00:23:26.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.284 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:26.284 00:07:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:26.544 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:26.803 Waiting for block devices as requested 00:23:26.803 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:26.803 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:26.803 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:26.803 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:26.803 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:26.803 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:26.803 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:26.803 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:26.803 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:26.803 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:26.803 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:27.063 No valid GPT data, bailing 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:23:27.063 00:07:33 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:27.063 No valid GPT data, bailing 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:27.063 No valid GPT data, bailing 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:27.063 No valid GPT data, bailing 00:23:27.063 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:27.323 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:27.323 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:27.323 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:23:27.323 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:23:27.323 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:27.323 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:27.323 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:27.323 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:27.323 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:23:27.323 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:23:27.323 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:23:27.323 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:27.323 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:23:27.323 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:23:27.323 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:23:27.323 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:27.323 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -a 10.0.0.1 -t tcp -s 4420 00:23:27.323 00:23:27.323 Discovery Log Number of Records 2, Generation counter 2 00:23:27.323 =====Discovery Log Entry 0====== 00:23:27.323 trtype: tcp 00:23:27.323 adrfam: ipv4 00:23:27.323 subtype: current discovery subsystem 00:23:27.323 treq: not specified, sq flow control disable supported 00:23:27.323 portid: 1 00:23:27.323 trsvcid: 4420 00:23:27.323 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:27.323 traddr: 10.0.0.1 00:23:27.323 eflags: none 00:23:27.323 sectype: none 00:23:27.323 =====Discovery Log Entry 1====== 00:23:27.323 trtype: tcp 00:23:27.323 adrfam: ipv4 00:23:27.323 subtype: nvme subsystem 00:23:27.323 treq: not 
specified, sq flow control disable supported 00:23:27.323 portid: 1 00:23:27.323 trsvcid: 4420 00:23:27.323 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:27.323 traddr: 10.0.0.1 00:23:27.323 eflags: none 00:23:27.323 sectype: none 00:23:27.323 00:07:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:27.323 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:27.583 ===================================================== 00:23:27.583 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:27.583 ===================================================== 00:23:27.583 Controller Capabilities/Features 00:23:27.583 ================================ 00:23:27.583 Vendor ID: 0000 00:23:27.583 Subsystem Vendor ID: 0000 00:23:27.583 Serial Number: 177b96d4553438494f28 00:23:27.583 Model Number: Linux 00:23:27.583 Firmware Version: 6.8.9-20 00:23:27.583 Recommended Arb Burst: 0 00:23:27.583 IEEE OUI Identifier: 00 00 00 00:23:27.583 Multi-path I/O 00:23:27.583 May have multiple subsystem ports: No 00:23:27.583 May have multiple controllers: No 00:23:27.583 Associated with SR-IOV VF: No 00:23:27.583 Max Data Transfer Size: Unlimited 00:23:27.583 Max Number of Namespaces: 0 00:23:27.583 Max Number of I/O Queues: 1024 00:23:27.583 NVMe Specification Version (VS): 1.3 00:23:27.583 NVMe Specification Version (Identify): 1.3 00:23:27.583 Maximum Queue Entries: 1024 00:23:27.583 Contiguous Queues Required: No 00:23:27.583 Arbitration Mechanisms Supported 00:23:27.583 Weighted Round Robin: Not Supported 00:23:27.583 Vendor Specific: Not Supported 00:23:27.583 Reset Timeout: 7500 ms 00:23:27.583 Doorbell Stride: 4 bytes 00:23:27.583 NVM Subsystem Reset: Not Supported 00:23:27.583 Command Sets Supported 00:23:27.583 NVM Command Set: Supported 00:23:27.584 Boot Partition: Not Supported 00:23:27.584 Memory Page Size Minimum: 4096 bytes 00:23:27.584 Memory Page Size Maximum: 4096 bytes 00:23:27.584 Persistent Memory Region: Not Supported 00:23:27.584 Optional Asynchronous Events Supported 00:23:27.584 Namespace Attribute Notices: Not Supported 00:23:27.584 Firmware Activation Notices: Not Supported 00:23:27.584 ANA Change Notices: Not Supported 00:23:27.584 PLE Aggregate Log Change Notices: Not Supported 00:23:27.584 LBA Status Info Alert Notices: Not Supported 00:23:27.584 EGE Aggregate Log Change Notices: Not Supported 00:23:27.584 Normal NVM Subsystem Shutdown event: Not Supported 00:23:27.584 Zone Descriptor Change Notices: Not Supported 00:23:27.584 Discovery Log Change Notices: Supported 00:23:27.584 Controller Attributes 00:23:27.584 128-bit Host Identifier: Not Supported 00:23:27.584 Non-Operational Permissive Mode: Not Supported 00:23:27.584 NVM Sets: Not Supported 00:23:27.584 Read Recovery Levels: Not Supported 00:23:27.584 Endurance Groups: Not Supported 00:23:27.584 Predictable Latency Mode: Not Supported 00:23:27.584 Traffic Based Keep ALive: Not Supported 00:23:27.584 Namespace Granularity: Not Supported 00:23:27.584 SQ Associations: Not Supported 00:23:27.584 UUID List: Not Supported 00:23:27.584 Multi-Domain Subsystem: Not Supported 00:23:27.584 Fixed Capacity Management: Not Supported 00:23:27.584 Variable Capacity Management: Not Supported 00:23:27.584 Delete Endurance Group: Not Supported 00:23:27.584 Delete NVM Set: Not Supported 00:23:27.584 Extended LBA Formats Supported: Not Supported 00:23:27.584 Flexible Data 
Placement Supported: Not Supported 00:23:27.584 00:23:27.584 Controller Memory Buffer Support 00:23:27.584 ================================ 00:23:27.584 Supported: No 00:23:27.584 00:23:27.584 Persistent Memory Region Support 00:23:27.584 ================================ 00:23:27.584 Supported: No 00:23:27.584 00:23:27.584 Admin Command Set Attributes 00:23:27.584 ============================ 00:23:27.584 Security Send/Receive: Not Supported 00:23:27.584 Format NVM: Not Supported 00:23:27.584 Firmware Activate/Download: Not Supported 00:23:27.584 Namespace Management: Not Supported 00:23:27.584 Device Self-Test: Not Supported 00:23:27.584 Directives: Not Supported 00:23:27.584 NVMe-MI: Not Supported 00:23:27.584 Virtualization Management: Not Supported 00:23:27.584 Doorbell Buffer Config: Not Supported 00:23:27.584 Get LBA Status Capability: Not Supported 00:23:27.584 Command & Feature Lockdown Capability: Not Supported 00:23:27.584 Abort Command Limit: 1 00:23:27.584 Async Event Request Limit: 1 00:23:27.584 Number of Firmware Slots: N/A 00:23:27.584 Firmware Slot 1 Read-Only: N/A 00:23:27.584 Firmware Activation Without Reset: N/A 00:23:27.584 Multiple Update Detection Support: N/A 00:23:27.584 Firmware Update Granularity: No Information Provided 00:23:27.584 Per-Namespace SMART Log: No 00:23:27.584 Asymmetric Namespace Access Log Page: Not Supported 00:23:27.584 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:27.584 Command Effects Log Page: Not Supported 00:23:27.584 Get Log Page Extended Data: Supported 00:23:27.584 Telemetry Log Pages: Not Supported 00:23:27.584 Persistent Event Log Pages: Not Supported 00:23:27.584 Supported Log Pages Log Page: May Support 00:23:27.584 Commands Supported & Effects Log Page: Not Supported 00:23:27.584 Feature Identifiers & Effects Log Page:May Support 00:23:27.584 NVMe-MI Commands & Effects Log Page: May Support 00:23:27.584 Data Area 4 for Telemetry Log: Not Supported 00:23:27.584 Error Log Page Entries Supported: 1 00:23:27.584 Keep Alive: Not Supported 00:23:27.584 00:23:27.584 NVM Command Set Attributes 00:23:27.584 ========================== 00:23:27.584 Submission Queue Entry Size 00:23:27.584 Max: 1 00:23:27.584 Min: 1 00:23:27.584 Completion Queue Entry Size 00:23:27.584 Max: 1 00:23:27.584 Min: 1 00:23:27.584 Number of Namespaces: 0 00:23:27.584 Compare Command: Not Supported 00:23:27.584 Write Uncorrectable Command: Not Supported 00:23:27.584 Dataset Management Command: Not Supported 00:23:27.584 Write Zeroes Command: Not Supported 00:23:27.584 Set Features Save Field: Not Supported 00:23:27.584 Reservations: Not Supported 00:23:27.584 Timestamp: Not Supported 00:23:27.584 Copy: Not Supported 00:23:27.584 Volatile Write Cache: Not Present 00:23:27.584 Atomic Write Unit (Normal): 1 00:23:27.584 Atomic Write Unit (PFail): 1 00:23:27.584 Atomic Compare & Write Unit: 1 00:23:27.584 Fused Compare & Write: Not Supported 00:23:27.584 Scatter-Gather List 00:23:27.584 SGL Command Set: Supported 00:23:27.584 SGL Keyed: Not Supported 00:23:27.584 SGL Bit Bucket Descriptor: Not Supported 00:23:27.584 SGL Metadata Pointer: Not Supported 00:23:27.584 Oversized SGL: Not Supported 00:23:27.584 SGL Metadata Address: Not Supported 00:23:27.584 SGL Offset: Supported 00:23:27.584 Transport SGL Data Block: Not Supported 00:23:27.584 Replay Protected Memory Block: Not Supported 00:23:27.584 00:23:27.584 Firmware Slot Information 00:23:27.584 ========================= 00:23:27.584 Active slot: 0 00:23:27.584 00:23:27.584 00:23:27.584 Error Log 
00:23:27.584 ========= 00:23:27.584 00:23:27.584 Active Namespaces 00:23:27.584 ================= 00:23:27.584 Discovery Log Page 00:23:27.584 ================== 00:23:27.584 Generation Counter: 2 00:23:27.584 Number of Records: 2 00:23:27.584 Record Format: 0 00:23:27.584 00:23:27.584 Discovery Log Entry 0 00:23:27.584 ---------------------- 00:23:27.584 Transport Type: 3 (TCP) 00:23:27.584 Address Family: 1 (IPv4) 00:23:27.584 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:27.584 Entry Flags: 00:23:27.584 Duplicate Returned Information: 0 00:23:27.584 Explicit Persistent Connection Support for Discovery: 0 00:23:27.584 Transport Requirements: 00:23:27.584 Secure Channel: Not Specified 00:23:27.584 Port ID: 1 (0x0001) 00:23:27.584 Controller ID: 65535 (0xffff) 00:23:27.584 Admin Max SQ Size: 32 00:23:27.584 Transport Service Identifier: 4420 00:23:27.584 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:27.584 Transport Address: 10.0.0.1 00:23:27.584 Discovery Log Entry 1 00:23:27.584 ---------------------- 00:23:27.584 Transport Type: 3 (TCP) 00:23:27.584 Address Family: 1 (IPv4) 00:23:27.584 Subsystem Type: 2 (NVM Subsystem) 00:23:27.584 Entry Flags: 00:23:27.584 Duplicate Returned Information: 0 00:23:27.584 Explicit Persistent Connection Support for Discovery: 0 00:23:27.584 Transport Requirements: 00:23:27.584 Secure Channel: Not Specified 00:23:27.584 Port ID: 1 (0x0001) 00:23:27.584 Controller ID: 65535 (0xffff) 00:23:27.584 Admin Max SQ Size: 32 00:23:27.584 Transport Service Identifier: 4420 00:23:27.584 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:27.584 Transport Address: 10.0.0.1 00:23:27.584 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:27.844 get_feature(0x01) failed 00:23:27.844 get_feature(0x02) failed 00:23:27.844 get_feature(0x04) failed 00:23:27.844 ===================================================== 00:23:27.844 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:27.844 ===================================================== 00:23:27.844 Controller Capabilities/Features 00:23:27.844 ================================ 00:23:27.844 Vendor ID: 0000 00:23:27.844 Subsystem Vendor ID: 0000 00:23:27.844 Serial Number: 9754d6999fbb5ce12054 00:23:27.844 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:27.844 Firmware Version: 6.8.9-20 00:23:27.844 Recommended Arb Burst: 6 00:23:27.844 IEEE OUI Identifier: 00 00 00 00:23:27.844 Multi-path I/O 00:23:27.844 May have multiple subsystem ports: Yes 00:23:27.844 May have multiple controllers: Yes 00:23:27.844 Associated with SR-IOV VF: No 00:23:27.844 Max Data Transfer Size: Unlimited 00:23:27.844 Max Number of Namespaces: 1024 00:23:27.844 Max Number of I/O Queues: 128 00:23:27.844 NVMe Specification Version (VS): 1.3 00:23:27.845 NVMe Specification Version (Identify): 1.3 00:23:27.845 Maximum Queue Entries: 1024 00:23:27.845 Contiguous Queues Required: No 00:23:27.845 Arbitration Mechanisms Supported 00:23:27.845 Weighted Round Robin: Not Supported 00:23:27.845 Vendor Specific: Not Supported 00:23:27.845 Reset Timeout: 7500 ms 00:23:27.845 Doorbell Stride: 4 bytes 00:23:27.845 NVM Subsystem Reset: Not Supported 00:23:27.845 Command Sets Supported 00:23:27.845 NVM Command Set: Supported 00:23:27.845 Boot Partition: Not Supported 00:23:27.845 Memory 
Page Size Minimum: 4096 bytes 00:23:27.845 Memory Page Size Maximum: 4096 bytes 00:23:27.845 Persistent Memory Region: Not Supported 00:23:27.845 Optional Asynchronous Events Supported 00:23:27.845 Namespace Attribute Notices: Supported 00:23:27.845 Firmware Activation Notices: Not Supported 00:23:27.845 ANA Change Notices: Supported 00:23:27.845 PLE Aggregate Log Change Notices: Not Supported 00:23:27.845 LBA Status Info Alert Notices: Not Supported 00:23:27.845 EGE Aggregate Log Change Notices: Not Supported 00:23:27.845 Normal NVM Subsystem Shutdown event: Not Supported 00:23:27.845 Zone Descriptor Change Notices: Not Supported 00:23:27.845 Discovery Log Change Notices: Not Supported 00:23:27.845 Controller Attributes 00:23:27.845 128-bit Host Identifier: Supported 00:23:27.845 Non-Operational Permissive Mode: Not Supported 00:23:27.845 NVM Sets: Not Supported 00:23:27.845 Read Recovery Levels: Not Supported 00:23:27.845 Endurance Groups: Not Supported 00:23:27.845 Predictable Latency Mode: Not Supported 00:23:27.845 Traffic Based Keep ALive: Supported 00:23:27.845 Namespace Granularity: Not Supported 00:23:27.845 SQ Associations: Not Supported 00:23:27.845 UUID List: Not Supported 00:23:27.845 Multi-Domain Subsystem: Not Supported 00:23:27.845 Fixed Capacity Management: Not Supported 00:23:27.845 Variable Capacity Management: Not Supported 00:23:27.845 Delete Endurance Group: Not Supported 00:23:27.845 Delete NVM Set: Not Supported 00:23:27.845 Extended LBA Formats Supported: Not Supported 00:23:27.845 Flexible Data Placement Supported: Not Supported 00:23:27.845 00:23:27.845 Controller Memory Buffer Support 00:23:27.845 ================================ 00:23:27.845 Supported: No 00:23:27.845 00:23:27.845 Persistent Memory Region Support 00:23:27.845 ================================ 00:23:27.845 Supported: No 00:23:27.845 00:23:27.845 Admin Command Set Attributes 00:23:27.845 ============================ 00:23:27.845 Security Send/Receive: Not Supported 00:23:27.845 Format NVM: Not Supported 00:23:27.845 Firmware Activate/Download: Not Supported 00:23:27.845 Namespace Management: Not Supported 00:23:27.845 Device Self-Test: Not Supported 00:23:27.845 Directives: Not Supported 00:23:27.845 NVMe-MI: Not Supported 00:23:27.845 Virtualization Management: Not Supported 00:23:27.845 Doorbell Buffer Config: Not Supported 00:23:27.845 Get LBA Status Capability: Not Supported 00:23:27.845 Command & Feature Lockdown Capability: Not Supported 00:23:27.845 Abort Command Limit: 4 00:23:27.845 Async Event Request Limit: 4 00:23:27.845 Number of Firmware Slots: N/A 00:23:27.845 Firmware Slot 1 Read-Only: N/A 00:23:27.845 Firmware Activation Without Reset: N/A 00:23:27.845 Multiple Update Detection Support: N/A 00:23:27.845 Firmware Update Granularity: No Information Provided 00:23:27.845 Per-Namespace SMART Log: Yes 00:23:27.845 Asymmetric Namespace Access Log Page: Supported 00:23:27.845 ANA Transition Time : 10 sec 00:23:27.845 00:23:27.845 Asymmetric Namespace Access Capabilities 00:23:27.845 ANA Optimized State : Supported 00:23:27.845 ANA Non-Optimized State : Supported 00:23:27.845 ANA Inaccessible State : Supported 00:23:27.845 ANA Persistent Loss State : Supported 00:23:27.845 ANA Change State : Supported 00:23:27.845 ANAGRPID is not changed : No 00:23:27.845 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:27.845 00:23:27.845 ANA Group Identifier Maximum : 128 00:23:27.845 Number of ANA Group Identifiers : 128 00:23:27.845 Max Number of Allowed Namespaces : 1024 00:23:27.845 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:23:27.845 Command Effects Log Page: Supported 00:23:27.845 Get Log Page Extended Data: Supported 00:23:27.845 Telemetry Log Pages: Not Supported 00:23:27.845 Persistent Event Log Pages: Not Supported 00:23:27.845 Supported Log Pages Log Page: May Support 00:23:27.845 Commands Supported & Effects Log Page: Not Supported 00:23:27.845 Feature Identifiers & Effects Log Page:May Support 00:23:27.845 NVMe-MI Commands & Effects Log Page: May Support 00:23:27.845 Data Area 4 for Telemetry Log: Not Supported 00:23:27.845 Error Log Page Entries Supported: 128 00:23:27.845 Keep Alive: Supported 00:23:27.845 Keep Alive Granularity: 1000 ms 00:23:27.845 00:23:27.845 NVM Command Set Attributes 00:23:27.845 ========================== 00:23:27.845 Submission Queue Entry Size 00:23:27.845 Max: 64 00:23:27.845 Min: 64 00:23:27.845 Completion Queue Entry Size 00:23:27.845 Max: 16 00:23:27.845 Min: 16 00:23:27.845 Number of Namespaces: 1024 00:23:27.845 Compare Command: Not Supported 00:23:27.845 Write Uncorrectable Command: Not Supported 00:23:27.845 Dataset Management Command: Supported 00:23:27.845 Write Zeroes Command: Supported 00:23:27.845 Set Features Save Field: Not Supported 00:23:27.845 Reservations: Not Supported 00:23:27.845 Timestamp: Not Supported 00:23:27.845 Copy: Not Supported 00:23:27.845 Volatile Write Cache: Present 00:23:27.845 Atomic Write Unit (Normal): 1 00:23:27.845 Atomic Write Unit (PFail): 1 00:23:27.845 Atomic Compare & Write Unit: 1 00:23:27.845 Fused Compare & Write: Not Supported 00:23:27.845 Scatter-Gather List 00:23:27.845 SGL Command Set: Supported 00:23:27.845 SGL Keyed: Not Supported 00:23:27.845 SGL Bit Bucket Descriptor: Not Supported 00:23:27.845 SGL Metadata Pointer: Not Supported 00:23:27.845 Oversized SGL: Not Supported 00:23:27.845 SGL Metadata Address: Not Supported 00:23:27.845 SGL Offset: Supported 00:23:27.845 Transport SGL Data Block: Not Supported 00:23:27.845 Replay Protected Memory Block: Not Supported 00:23:27.845 00:23:27.845 Firmware Slot Information 00:23:27.845 ========================= 00:23:27.845 Active slot: 0 00:23:27.845 00:23:27.845 Asymmetric Namespace Access 00:23:27.845 =========================== 00:23:27.845 Change Count : 0 00:23:27.845 Number of ANA Group Descriptors : 1 00:23:27.845 ANA Group Descriptor : 0 00:23:27.845 ANA Group ID : 1 00:23:27.845 Number of NSID Values : 1 00:23:27.845 Change Count : 0 00:23:27.845 ANA State : 1 00:23:27.845 Namespace Identifier : 1 00:23:27.845 00:23:27.845 Commands Supported and Effects 00:23:27.845 ============================== 00:23:27.845 Admin Commands 00:23:27.845 -------------- 00:23:27.845 Get Log Page (02h): Supported 00:23:27.845 Identify (06h): Supported 00:23:27.845 Abort (08h): Supported 00:23:27.845 Set Features (09h): Supported 00:23:27.845 Get Features (0Ah): Supported 00:23:27.845 Asynchronous Event Request (0Ch): Supported 00:23:27.845 Keep Alive (18h): Supported 00:23:27.845 I/O Commands 00:23:27.845 ------------ 00:23:27.845 Flush (00h): Supported 00:23:27.845 Write (01h): Supported LBA-Change 00:23:27.845 Read (02h): Supported 00:23:27.845 Write Zeroes (08h): Supported LBA-Change 00:23:27.845 Dataset Management (09h): Supported 00:23:27.845 00:23:27.845 Error Log 00:23:27.845 ========= 00:23:27.845 Entry: 0 00:23:27.845 Error Count: 0x3 00:23:27.845 Submission Queue Id: 0x0 00:23:27.845 Command Id: 0x5 00:23:27.845 Phase Bit: 0 00:23:27.845 Status Code: 0x2 00:23:27.845 Status Code Type: 0x0 00:23:27.845 Do Not Retry: 1 00:23:27.845 Error 
Location: 0x28 00:23:27.845 LBA: 0x0 00:23:27.845 Namespace: 0x0 00:23:27.845 Vendor Log Page: 0x0 00:23:27.845 ----------- 00:23:27.845 Entry: 1 00:23:27.845 Error Count: 0x2 00:23:27.845 Submission Queue Id: 0x0 00:23:27.845 Command Id: 0x5 00:23:27.845 Phase Bit: 0 00:23:27.845 Status Code: 0x2 00:23:27.845 Status Code Type: 0x0 00:23:27.845 Do Not Retry: 1 00:23:27.845 Error Location: 0x28 00:23:27.845 LBA: 0x0 00:23:27.845 Namespace: 0x0 00:23:27.845 Vendor Log Page: 0x0 00:23:27.845 ----------- 00:23:27.845 Entry: 2 00:23:27.845 Error Count: 0x1 00:23:27.845 Submission Queue Id: 0x0 00:23:27.845 Command Id: 0x4 00:23:27.845 Phase Bit: 0 00:23:27.846 Status Code: 0x2 00:23:27.846 Status Code Type: 0x0 00:23:27.846 Do Not Retry: 1 00:23:27.846 Error Location: 0x28 00:23:27.846 LBA: 0x0 00:23:27.846 Namespace: 0x0 00:23:27.846 Vendor Log Page: 0x0 00:23:27.846 00:23:27.846 Number of Queues 00:23:27.846 ================ 00:23:27.846 Number of I/O Submission Queues: 128 00:23:27.846 Number of I/O Completion Queues: 128 00:23:27.846 00:23:27.846 ZNS Specific Controller Data 00:23:27.846 ============================ 00:23:27.846 Zone Append Size Limit: 0 00:23:27.846 00:23:27.846 00:23:27.846 Active Namespaces 00:23:27.846 ================= 00:23:27.846 get_feature(0x05) failed 00:23:27.846 Namespace ID:1 00:23:27.846 Command Set Identifier: NVM (00h) 00:23:27.846 Deallocate: Supported 00:23:27.846 Deallocated/Unwritten Error: Not Supported 00:23:27.846 Deallocated Read Value: Unknown 00:23:27.846 Deallocate in Write Zeroes: Not Supported 00:23:27.846 Deallocated Guard Field: 0xFFFF 00:23:27.846 Flush: Supported 00:23:27.846 Reservation: Not Supported 00:23:27.846 Namespace Sharing Capabilities: Multiple Controllers 00:23:27.846 Size (in LBAs): 1310720 (5GiB) 00:23:27.846 Capacity (in LBAs): 1310720 (5GiB) 00:23:27.846 Utilization (in LBAs): 1310720 (5GiB) 00:23:27.846 UUID: a3af063f-4f6f-4c63-aa1c-cc3fc6c22c4a 00:23:27.846 Thin Provisioning: Not Supported 00:23:27.846 Per-NS Atomic Units: Yes 00:23:27.846 Atomic Boundary Size (Normal): 0 00:23:27.846 Atomic Boundary Size (PFail): 0 00:23:27.846 Atomic Boundary Offset: 0 00:23:27.846 NGUID/EUI64 Never Reused: No 00:23:27.846 ANA group ID: 1 00:23:27.846 Namespace Write Protected: No 00:23:27.846 Number of LBA Formats: 1 00:23:27.846 Current LBA Format: LBA Format #00 00:23:27.846 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:23:27.846 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:27.846 rmmod nvme_tcp 00:23:27.846 rmmod nvme_fabrics 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:23:27.846 00:07:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:27.846 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:28.106 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:28.366 00:07:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:28.934 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:28.934 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:28.934 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:29.193 00:23:29.193 real 0m3.486s 00:23:29.193 user 0m1.247s 00:23:29.193 sys 0m1.574s 00:23:29.193 00:07:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.193 ************************************ 00:23:29.193 END TEST nvmf_identify_kernel_target 00:23:29.193 ************************************ 00:23:29.193 00:07:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.193 00:07:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:29.193 00:07:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:29.193 00:07:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.193 00:07:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.193 ************************************ 00:23:29.193 START TEST nvmf_auth_host 00:23:29.193 ************************************ 00:23:29.193 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:29.193 * Looking for test storage... 
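(For orientation before the next test begins: the configure_kernel_target/clean_kernel_target steps traced above reduce to the short standalone sketch below. The NQN, address, and backing device are the values from this run; the nvmet configfs attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the standard Linux ones and are an assumption here, since the trace shows only the values being echoed, not the destination files.)

# Sketch: kernel NVMe-oF target over TCP via configfs, plus the teardown
# that clean_kernel_target performs. Run as root; assumes /dev/nvme1n1 is
# an unused namespace, as the block-scan loop above determined.
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet                               # nvmet_tcp is pulled in when the tcp port binds
mkdir "$subsys" "$subsys/namespaces/1" "$port"

echo "SPDK-$nqn"  > "$subsys/attr_model"     # surfaces as "Model Number" in the Identify output above
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"          # exposing the subsystem on the port starts the listener

nvme discover -t tcp -a 10.0.0.1 -s 4420     # expect the two discovery log entries shown above

# Teardown, mirroring the rm/rmdir/modprobe -r sequence in the trace:
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/$nqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet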
00:23:29.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:29.193 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:29.193 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:29.193 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:29.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.453 --rc genhtml_branch_coverage=1 00:23:29.453 --rc genhtml_function_coverage=1 00:23:29.453 --rc genhtml_legend=1 00:23:29.453 --rc geninfo_all_blocks=1 00:23:29.453 --rc geninfo_unexecuted_blocks=1 00:23:29.453 00:23:29.453 ' 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:29.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.453 --rc genhtml_branch_coverage=1 00:23:29.453 --rc genhtml_function_coverage=1 00:23:29.453 --rc genhtml_legend=1 00:23:29.453 --rc geninfo_all_blocks=1 00:23:29.453 --rc geninfo_unexecuted_blocks=1 00:23:29.453 00:23:29.453 ' 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:29.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.453 --rc genhtml_branch_coverage=1 00:23:29.453 --rc genhtml_function_coverage=1 00:23:29.453 --rc genhtml_legend=1 00:23:29.453 --rc geninfo_all_blocks=1 00:23:29.453 --rc geninfo_unexecuted_blocks=1 00:23:29.453 00:23:29.453 ' 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:29.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.453 --rc genhtml_branch_coverage=1 00:23:29.453 --rc genhtml_function_coverage=1 00:23:29.453 --rc genhtml_legend=1 00:23:29.453 --rc geninfo_all_blocks=1 00:23:29.453 --rc geninfo_unexecuted_blocks=1 00:23:29.453 00:23:29.453 ' 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.453 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:29.454 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:29.454 Cannot find device "nvmf_init_br" 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:29.454 Cannot find device "nvmf_init_br2" 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:29.454 Cannot find device "nvmf_tgt_br" 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:29.454 Cannot find device "nvmf_tgt_br2" 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:23:29.454 00:07:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:29.454 Cannot find device "nvmf_init_br" 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:29.454 Cannot find device "nvmf_init_br2" 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:29.454 Cannot find device "nvmf_tgt_br" 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:29.454 Cannot find device "nvmf_tgt_br2" 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:29.454 Cannot find device "nvmf_br" 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:29.454 Cannot find device "nvmf_init_if" 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:29.454 Cannot find device "nvmf_init_if2" 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:29.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.454 00:07:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:29.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:29.454 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
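(For reference: stripped of the helpers' cleanup and error handling, the veth/bridge topology nvmf_veth_init is assembling here can be reproduced with plain iproute2 roughly as below. This is a minimal sketch showing one initiator/target pair of each; interface names, namespace name, and addresses are taken from the trace above.)

# one veth pair per side: the *_if end carries the IP, the *_br end joins the bridge
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                           # target side lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator IP (root namespace)
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target IP
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                          # bridge stitches both sides together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br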
00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:29.714 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:29.714 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:23:29.714 00:23:29.714 --- 10.0.0.3 ping statistics --- 00:23:29.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.714 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:23:29.714 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:29.715 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:29.715 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:23:29.715 00:23:29.715 --- 10.0.0.4 ping statistics --- 00:23:29.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.715 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:29.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:29.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:23:29.715 00:23:29.715 --- 10.0.0.1 ping statistics --- 00:23:29.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.715 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:29.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:29.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:23:29.715 00:23:29.715 --- 10.0.0.2 ping statistics --- 00:23:29.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.715 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=84359 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 84359 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 84359 ']' 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
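(With connectivity verified by the pings above, nvmfappstart launches the target inside that namespace and blocks until its RPC socket answers. Reduced to its essentials it amounts to the sketch below; the binary path and flags are from the trace, while the polling loop is a simplified stand-in for waitforlisten and spdk_get_version is just a cheap RPC to probe with.)

# run the SPDK NVMe-oF target in the namespace, tracing the nvme_auth component
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# poll until the app is up and serving RPCs on the default UNIX socket
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        spdk_get_version >/dev/null 2>&1; do
    sleep 0.1
done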
00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.715 00:07:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=af734c6c70ea8d74b5ba976a21d49162 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ekb 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key af734c6c70ea8d74b5ba976a21d49162 0 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 af734c6c70ea8d74b5ba976a21d49162 0 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=af734c6c70ea8d74b5ba976a21d49162 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ekb 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ekb 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ekb 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:31.093 00:07:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=321f41b9f29c27306c9e0ff91fff2d655ea3f499642666a59507a14f8ce8ec00 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.swp 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 321f41b9f29c27306c9e0ff91fff2d655ea3f499642666a59507a14f8ce8ec00 3 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 321f41b9f29c27306c9e0ff91fff2d655ea3f499642666a59507a14f8ce8ec00 3 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=321f41b9f29c27306c9e0ff91fff2d655ea3f499642666a59507a14f8ce8ec00 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.swp 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.swp 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.swp 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a5a03bac6672c2b1ae4d8ed5fe43a115cc1ab80d93c64ab4 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.FFA 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a5a03bac6672c2b1ae4d8ed5fe43a115cc1ab80d93c64ab4 0 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a5a03bac6672c2b1ae4d8ed5fe43a115cc1ab80d93c64ab4 0 
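(Each gen_dhchap_key call in this stretch reads len/2 random bytes from /dev/urandom as a hex string with xxd, then pipes it through an inline python snippet that xtrace does not echo, producing the DHHC-1 strings that surface later in the log. Base64-decoding one of those strings shows the payload is the ASCII hex secret followed by four extra bytes; assuming those are the CRC-32 trailer that nvme-cli uses for this interchange format, a standalone equivalent would look like:)

# hedged re-creation of gen_dhchap_key/format_dhchap_key
# digest id 0-3 = null/sha256/sha384/sha512
len=32                                          # 'null 32' in the trace -> 32 hex chars
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
digest=0
python3 - "$key" "$digest" <<'PY'
import base64, binascii, struct, sys
secret = sys.argv[1].encode()                   # the hex string itself is the payload
crc = struct.pack('<I', binascii.crc32(secret) & 0xffffffff)
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(secret + crc).decode()}:")
PY

(The resulting string is written to a mode-0600 temp file, which is what gets registered with rpc.py keyring_file_add_key as the keys[]/ckeys[] entries a few steps further on.)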
00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a5a03bac6672c2b1ae4d8ed5fe43a115cc1ab80d93c64ab4 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.FFA 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.FFA 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.FFA 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=abb0c627d844c73028c0cc2a77f53c6d0ceac1b57b84f8f3 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.p4k 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key abb0c627d844c73028c0cc2a77f53c6d0ceac1b57b84f8f3 2 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 abb0c627d844c73028c0cc2a77f53c6d0ceac1b57b84f8f3 2 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=abb0c627d844c73028c0cc2a77f53c6d0ceac1b57b84f8f3 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.p4k 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.p4k 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.p4k 00:23:31.093 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:31.094 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:31.094 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:31.094 00:07:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:31.094 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:31.094 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:31.094 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:31.094 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=42cd01fcf8558105b9c3b81dbb9ca9d9 00:23:31.094 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:31.094 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.1sj 00:23:31.094 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 42cd01fcf8558105b9c3b81dbb9ca9d9 1 00:23:31.094 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 42cd01fcf8558105b9c3b81dbb9ca9d9 1 00:23:31.094 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:31.094 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:31.094 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=42cd01fcf8558105b9c3b81dbb9ca9d9 00:23:31.094 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:31.094 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.1sj 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.1sj 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.1sj 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=09e0f26ea09b283d355c757264795c15 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.MfY 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 09e0f26ea09b283d355c757264795c15 1 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 09e0f26ea09b283d355c757264795c15 1 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=09e0f26ea09b283d355c757264795c15 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:31.353 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.MfY 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.MfY 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.MfY 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7c51007f4c7a0d1a2488818786e48fb81bf61b9dea9403fe 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.eUc 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7c51007f4c7a0d1a2488818786e48fb81bf61b9dea9403fe 2 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7c51007f4c7a0d1a2488818786e48fb81bf61b9dea9403fe 2 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7c51007f4c7a0d1a2488818786e48fb81bf61b9dea9403fe 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.eUc 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.eUc 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.eUc 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:31.354 00:07:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1266a7f3c3226ae8e6628c70c06ecd5d 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ITB 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1266a7f3c3226ae8e6628c70c06ecd5d 0 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1266a7f3c3226ae8e6628c70c06ecd5d 0 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1266a7f3c3226ae8e6628c70c06ecd5d 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ITB 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ITB 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ITB 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0c307130a8a868935fd2c747332bf54c2c121177de69c8cdad30f73f3200cb81 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.GxZ 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0c307130a8a868935fd2c747332bf54c2c121177de69c8cdad30f73f3200cb81 3 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0c307130a8a868935fd2c747332bf54c2c121177de69c8cdad30f73f3200cb81 3 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0c307130a8a868935fd2c747332bf54c2c121177de69c8cdad30f73f3200cb81 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:31.354 00:07:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:23:31.354 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.GxZ 00:23:31.354 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.GxZ 00:23:31.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.613 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.GxZ 00:23:31.613 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:31.613 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 84359 00:23:31.613 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 84359 ']' 00:23:31.613 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.613 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.613 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.613 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.613 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ekb 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.swp ]] 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.swp 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.FFA 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.p4k ]] 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.p4k 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.1sj 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.872 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.MfY ]] 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MfY 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.eUc 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ITB ]] 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ITB 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.GxZ 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:31.873 00:07:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:31.873 00:07:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:32.132 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:32.390 Waiting for block devices as requested 00:23:32.390 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:32.390 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:32.968 No valid GPT data, bailing 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:32.968 No valid GPT data, bailing 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:23:32.968 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:33.226 No valid GPT data, bailing 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:33.226 No valid GPT data, bailing 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:33.226 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -a 10.0.0.1 -t tcp -s 4420 00:23:33.226 00:23:33.226 Discovery Log Number of Records 2, Generation counter 2 00:23:33.226 =====Discovery Log Entry 0====== 00:23:33.226 trtype: tcp 00:23:33.226 adrfam: ipv4 00:23:33.226 subtype: current discovery subsystem 00:23:33.226 treq: not specified, sq flow control disable supported 00:23:33.226 portid: 1 00:23:33.226 trsvcid: 4420 00:23:33.226 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:33.226 traddr: 10.0.0.1 00:23:33.226 eflags: none 00:23:33.226 sectype: none 00:23:33.226 =====Discovery Log Entry 1====== 00:23:33.226 trtype: tcp 00:23:33.226 adrfam: ipv4 00:23:33.226 subtype: nvme subsystem 00:23:33.226 treq: not specified, sq flow control disable supported 00:23:33.226 portid: 1 00:23:33.226 trsvcid: 4420 00:23:33.227 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:33.227 traddr: 10.0.0.1 00:23:33.227 eflags: none 00:23:33.227 sectype: none 00:23:33.227 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:33.227 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:33.227 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:33.227 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:33.227 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.227 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.227 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:33.227 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:33.227 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:33.227 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:33.227 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.227 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:33.486 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:33.486 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: ]] 00:23:33.486 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:33.486 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:33.486 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:33.486 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:33.486 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:33.486 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:33.486 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.486 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:33.486 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:33.486 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:33.486 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.486 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:33.486 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.486 00:07:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.486 nvme0n1 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.486 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: ]] 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.746 nvme0n1 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.746 
00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: ]] 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.746 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:33.747 00:07:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:33.747 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:33.747 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.747 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.747 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:33.747 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.747 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:33.747 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:33.747 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:33.747 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.747 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.747 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.006 nvme0n1 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:34.006 00:07:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: ]] 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.006 nvme0n1 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.006 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: ]] 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.265 00:07:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:34.265 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.266 nvme0n1 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:34.266 
00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.266 00:07:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
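Each nvmet_auth_set_key call in this sweep boils down to the four echoes traced at auth.sh@48-51: a digest wrapped as 'hmac(...)', the DH group, the host secret, and, when one exists, the controller secret. The DHHC-1:NN:...: strings are the NVMe in-band-authentication secret representation (a base64 blob with a trailing CRC), where the two-digit field records how the secret was transformed: 00 plain, 01 SHA-256, 02 SHA-384, 03 SHA-512. Key slot 4 above has an empty ckey, so that pass authenticates the host only. A sketch of where those echoes land when the target is the Linux kernel nvmet, assuming the configfs host entry below already exists and is linked to the subsystem, and that the keys/ckeys arrays hold the DHHC-1 strings from the trace:

# Sketch of the target-side half, under the assumptions above.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # Hypothetical host entry; created earlier in the suite's setup.
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "${host}/dhchap_hash"     # 'hmac(sha256)' above
    echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # e.g. ffdhe2048
    echo "${key}"          > "${host}/dhchap_key"      # host's DHHC-1 secret
    if [[ -n ${ckey} ]]; then
        # Only written for slots that define a controller key (not slot 4),
        # making those passes bidirectional.
        echo "${ckey}" > "${host}/dhchap_ctrl_key"
    fi
}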
00:23:34.525 nvme0n1 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:34.525 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:34.783 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: ]] 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:34.784 00:07:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.784 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.043 nvme0n1 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.043 00:07:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: ]] 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.043 00:07:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.043 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.303 nvme0n1 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: ]] 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.303 nvme0n1 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.303 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: ]] 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.563 00:07:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.563 nvme0n1 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.563 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.823 nvme0n1 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:35.823 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: ]] 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.391 00:07:42 
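Every pass also runs the get_main_ns_ip helper traced repeatedly above: it maps the transport to the name of the environment variable holding the right address (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and prints that variable's value, which is how 10.0.0.1 ends up in each attach command. A sketch of that selection, assuming the transport arrives in TEST_TRANSPORT and the value is resolved with bash indirection (both assumptions; only the candidate table and the emptiness checks are verbatim from the trace):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )

    # Bail out if the transport is unset or has no candidate variable.
    [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
    [[ -n ${!ip} ]] && echo "${!ip}"       # -> 10.0.0.1 in this run
}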
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.391 00:07:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.650 nvme0n1 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: ]] 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:36.650 00:07:43 
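Structurally, this whole section is the three nested loops traced at auth.sh@100-102: every digest is paired with every DH group, and each pair is exercised against every key slot. Only the sha256 passes over ffdhe2048/ffdhe3072/ffdhe4096 are visible in this excerpt, so the sketch below uses just those bounds; it reuses the nvmet_auth_set_key and connect_authenticate sketches from earlier:

digests=(sha256)                          # bounds visible in this excerpt
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do    # key slots 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done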
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.650 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.908 nvme0n1 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: ]] 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.908 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.166 nvme0n1 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: ]] 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:37.166 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:37.167 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.167 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.167 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:37.167 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.167 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:37.167 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:37.167 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:37.167 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:37.167 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.167 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.425 nvme0n1 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:37.425 00:07:43 
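Keyid 4 is the one unidirectional entry in the key table: its ckey= assignment above is empty, so the [[ -z '' ]] guard at host/auth.sh@51 skips echoing a controller key to the target, and the ckey=(${ckeys[keyid]:+...}) expansion at host/auth.sh@58 (continuing below) produces an empty array, which is why the key4 attach carries no --dhchap-ctrlr-key. A small self-contained sketch of that ${var:+word} idiom (the secret value here is a stand-in, not one of the harness's keys):

    # ${ckeys[keyid]:+word} expands to word only when ckeys[keyid] is set and
    # non-empty, so unidirectional keys contribute no extra arguments.
    declare -A ckeys=([1]=stand-in-secret [4]="")
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no ctrlr key>}"
    done
    # keyid=1 -> --dhchap-ctrlr-key ckey1
    # keyid=4 -> <no ctrlr key>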
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.425 00:07:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.425 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.425 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.425 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:37.425 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:37.425 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:37.425 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.425 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.425 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:37.425 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.425 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:37.425 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:37.425 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:37.425 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:37.425 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.425 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.684 nvme0n1 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:37.684 00:07:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: ]] 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.059 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.318 nvme0n1 00:23:39.318 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.318 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.318 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.318 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.318 00:07:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.318 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: ]] 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.577 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.836 nvme0n1 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.836 00:07:46 
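The nvmf/common.sh@769-783 block that repeats before every attach is get_main_ns_ip resolving which address the host should dial: an associative array maps the transport to the name of the environment variable that holds the address, and that name is then dereferenced. A reconstruction from the traced lines (a sketch; lines 779-782 of nvmf/common.sh never appear in this trace, so any fallback logic there is omitted):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # Both tests trace as nvmf/common.sh@775: transport set, candidate known.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # indirect expansion; traces as [[ -z 10.0.0.1 ]]
        echo "${!ip}"                 # 10.0.0.1 throughout this run
    }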
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: ]] 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.836 00:07:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:39.836 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:39.837 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.837 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.837 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.403 nvme0n1 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:40.403 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: ]] 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:40.404 00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.404 
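The DHHC-1:xx:<base64>: strings cycling through this section are the textual secret representation defined for NVMe-oF DH-HMAC-CHAP: the middle field records how the secret was transformed (00 = unhashed; 01/02/03 = transformed with SHA-256/SHA-384/SHA-512), and the base64 payload carries the raw key bytes followed by a 4-byte CRC-32 integrity check. A quick way to inspect one (this reuses key0 from this run; base64 and xxd availability is assumed):

    # Strip the 'DHHC-1:<id>:' prefix and the trailing ':' and dump the payload;
    # the last four bytes are the CRC-32 over the preceding key bytes.
    secret='DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/:'
    b64=${secret#DHHC-1:*:}
    b64=${b64%:}
    echo "$b64" | base64 -d | xxd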
00:07:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.662 nvme0n1 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.662 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.921 nvme0n1 00:23:40.921 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.921 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.921 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.921 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.921 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.179 00:07:47 
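At host/auth.sh@101 the outer loop just advanced, so from here the same five-key sweep repeats with ffdhe8192. The shape of the driver, reconstructed from the @101-@104 trace entries (dhgroups, keys, and digest are harness state whose contents are inferred from the trace; the harness presumably also iterates digests at an enclosing level, but only sha256 appears in this excerpt):

    # Matrix driver as traced at host/auth.sh@101-104 (reconstruction/sketch).
    for dhgroup in "${dhgroups[@]}"; do        # ffdhe4096, ffdhe6144, ffdhe8192, ...
        for keyid in "${!keys[@]}"; do         # 0 1 2 3 4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
        done
    done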
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: ]] 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:41.179 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:41.180 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.180 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:41.180 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.180 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.180 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.180 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.180 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:41.180 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:41.180 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:41.180 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.180 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.180 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:41.180 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.180 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:41.180 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:41.180 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:41.180 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:41.180 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.180 00:07:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.747 nvme0n1 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: ]] 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.747 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.314 nvme0n1 00:23:42.314 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.314 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.314 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.314 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.314 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: ]] 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.315 
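Each iteration closes with the verification visible at host/auth.sh@64-65 throughout this section: if authentication succeeded, the controller appears in bdev_nvme_get_controllers under the expected name (the interleaved bare nvme0n1 lines are the attach RPC's stdout, i.e. the bdev created for the authenticated controller's namespace), and it is then detached so the next key starts from a clean state. In sketch form:

    # Post-attach check and teardown, per host/auth.sh@64-65.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0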
00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.315 00:07:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.880 nvme0n1 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: ]] 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:42.880 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:42.881 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.881 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.448 nvme0n1 00:23:43.448 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.448 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.448 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.448 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.448 00:07:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.448 00:07:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:43.448 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:43.449 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:43.449 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.449 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:43.449 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.449 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.449 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.449 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.449 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:43.449 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:43.449 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:43.449 00:07:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.449 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.449 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:43.449 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.449 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:43.449 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:43.449 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:43.449 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:43.449 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.449 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.016 nvme0n1 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: ]] 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.016 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:44.276 nvme0n1 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: ]] 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.276 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.536 nvme0n1 00:23:44.536 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.536 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.536 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.536 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.536 00:07:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:44.536 
00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: ]] 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.536 nvme0n1 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.536 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: ]] 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.796 
00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.796 nvme0n1 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.796 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.056 nvme0n1 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: ]] 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.056 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.316 nvme0n1 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.316 
00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: ]] 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:45.316 00:07:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.316 nvme0n1 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.316 00:07:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:45.575 00:07:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: ]] 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:45.575 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.576 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:45.576 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:45.576 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:45.576 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:45.576 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.576 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.576 nvme0n1 00:23:45.576 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.576 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.576 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.576 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.576 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.576 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.576 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.576 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.576 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.576 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: ]] 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.835 00:07:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.835 nvme0n1 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:45.835 
00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:45.835 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.836 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
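
[editor's note] The trace repeats one fixed pattern for every digest/dhgroup/keyid combination. Below is a minimal sketch of a single round, reconstructed only from the commands visible in this xtrace; rpc_cmd and nvmet_auth_set_key are the host/auth.sh / nvmf/common.sh helpers the trace expands, and the NQNs, address, and port are the ones logged above.

# One connect_authenticate round, as reconstructed from this trace.
# Assumes the rpc_cmd / nvmet_auth_set_key helpers shown in the xtrace.
digest=sha384 dhgroup=ffdhe3072 keyid=4

# Target side: install the DH-HMAC-CHAP key for this dhgroup/keyid.
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

# Host side: restrict negotiation to the digest/dhgroup under test,
# then connect with the matching host key.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}"

# Verify the authenticated controller came up, then tear it down
# before the next keyid.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0

Keys 0 through 3 additionally pass --dhchap-ctrlr-key ckey<N> for bidirectional authentication; see the note on the ckey expansion further down.
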
00:23:46.095 nvme0n1 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: ]] 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:46.095 00:07:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.095 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.355 nvme0n1 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.355 00:07:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: ]] 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.355 00:07:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.355 00:07:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.614 nvme0n1 00:23:46.614 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.614 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.614 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.614 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.614 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.614 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.614 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.614 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.614 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: ]] 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.615 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.874 nvme0n1 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: ]] 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.874 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.133 nvme0n1 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.133 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.393 nvme0n1 00:23:47.393 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.393 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.393 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.393 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.393 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.393 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.393 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.393 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.393 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.393 00:07:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: ]] 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.393 00:07:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.393 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.652 nvme0n1 00:23:47.652 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.652 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.652 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.652 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.652 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.911 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.911 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.911 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.911 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.911 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.911 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.911 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.911 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:47.911 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.911 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:47.911 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:47.911 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:47.911 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: ]] 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:47.912 00:07:54 
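
[editor's note] Every attach in this trace is preceded by the same get_main_ns_ip expansion. A minimal reconstruction of that helper, inferred from the xtrace; the TEST_TRANSPORT name is an assumption, since the trace only shows its value (tcp) already substituted.

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    # Each transport publishes its address in a differently named variable.
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    # Indirect expansion resolves the variable *name* to its value,
    # e.g. NVMF_INITIATOR_IP -> 10.0.0.1 in this run.
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}
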
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.912 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.171 nvme0n1 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: ]] 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.171 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:48.172 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.172 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:48.172 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:48.172 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:48.172 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:48.172 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.172 00:07:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.769 nvme0n1 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: ]] 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.769 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.029 nvme0n1 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:49.029 00:07:55 
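
[editor's note] Why keyid 4 attaches with --dhchap-key only while keys 0-3 also carry --dhchap-ctrlr-key: the ckey=() line traced above uses ${var:+...}, which expands to the flag pair only when a bidirectional controller key exists, and ckeys[4] is empty in this run ([[ -z '' ]]). The same idiom with hypothetical placeholder secrets; per the NVMe-oF DH-HMAC-CHAP secret representation, the DHHC-1:NN: prefix encodes how the secret was transformed (00 cleartext up to 03 SHA-512), which is why the base64 payloads above vary in length.

# Hypothetical secrets, same expansion idiom as the trace.
ckeys=([0]="DHHC-1:03:placeholder-base64:" [4]="")

keyid=0
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${ckey[@]}"    # -> --dhchap-ctrlr-key ckey0

keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"   # -> 0: no controller key, so the flags are omitted entirely
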
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.029 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.288 nvme0n1 00:23:49.289 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.289 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.289 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.289 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.289 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.289 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.289 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.289 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.289 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.289 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: ]] 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.548 00:07:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.116 nvme0n1 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
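The `get_main_ns_ip` trace that repeats before every attach (nvmf/common.sh@769-@783) resolves which address the initiator should dial: it maps the transport in use to the name of an environment variable and prints that variable's value, 10.0.0.1 in this run. A sketch matching the traced steps; the `TEST_TRANSPORT` selector name is an assumption, everything else mirrors the trace:

```bash
# Reconstructed from the xtrace at nvmf/common.sh@769-@783.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # Bail out if no transport is selected or it has no known address variable.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    ip=${!ip}        # indirect expansion, e.g. NVMF_INITIATOR_IP=10.0.0.1
    [[ -z $ip ]] && return 1
    echo "$ip"
}
```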
DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: ]] 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.116 00:07:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.684 nvme0n1 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.685 00:07:57 
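On the host side, each `connect_authenticate <digest> <dhgroup> <keyid>` iteration visible above follows the same cycle: restrict the bdev driver to the single digest/dhgroup pair under test, attach with the matching key (adding `--dhchap-ctrlr-key` when a controller key exists, which makes the authentication bidirectional), confirm the controller enumerated, then detach. Condensed into the underlying RPCs for the sha384/ffdhe8192/keyid-1 iteration; the `rpc.py` path is an assumption about what the framework's `rpc_cmd` wrapper invokes:

```bash
# Subcommands and flags below are taken verbatim from the trace
# (host/auth.sh@60-@65); only the rpc.py location is assumed.
rpc=./scripts/rpc.py

# Advertise exactly one digest and one DH group for this iteration.
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

# DH-HMAC-CHAP runs during the fabrics connect; ckey1 enables the
# controller-to-host direction as well.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Authentication succeeded only if the controller actually materialized.
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

$rpc bdev_nvme_detach_controller nvme0
```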
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: ]] 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.685 00:07:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.685 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.254 nvme0n1 00:23:51.254 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.254 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.254 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.254 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.254 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.254 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.254 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.254 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.254 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.254 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.254 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.254 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.254 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:51.254 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.254 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:51.254 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: ]] 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:51.255 00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.255 
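The secrets themselves follow the NVMe DH-HMAC-CHAP key representation `DHHC-1:<hh>:<base64>:`, where, reading the standard representation, `<hh>` names the hash associated with the (optionally transformed) secret (00 = no transformation, 01/02/03 = SHA-256/384/512) and the base64 payload carries the secret followed by a 4-byte CRC-32 check. That structure can be sanity-checked directly against the keyid-3 secret above:

```bash
# Strip 'DHHC-1:02:' and the trailing ':', base64-decode, and count bytes.
# The 72 base64 characters decode to 52 bytes -- a 48-byte secret plus
# CRC-32 -- consistent with the SHA-384-sized ':02:' indicator.
key='DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==:'
payload=${key#DHHC-1:*:}
payload=${payload%:}
printf '%s' "$payload" | base64 -d | wc -c    # prints 52
```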
00:07:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.825 nvme0n1 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.825 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.395 nvme0n1 00:23:52.395 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.395 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.395 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.395 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.395 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.395 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.395 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.395 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.395 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.395 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.395 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.395 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:52.395 00:07:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:52.395 00:07:59 
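The switch from sha384 to sha512 at this point exposes the overall shape of the test: three nested loops (host/auth.sh@100-@102 in the trace) drive every digest x dhgroup x keyid combination through the same set-key-then-connect cycle seen throughout this log. A skeleton of that loop nest; the array contents are inferred from the combinations this excerpt exercises, so the exact lists are an assumption:

```bash
# Loop structure as traced at host/auth.sh@100-@104. This excerpt shows
# sha384 and sha512 against ffdhe2048 through ffdhe8192, so the arrays
# below are a plausible but assumed superset. keys is the secret array
# populated earlier in the script (the DHHC-1 values in this log).
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
```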
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: ]] 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:52.395 00:07:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.395 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.655 nvme0n1 00:23:52.655 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.655 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.655 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.655 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.655 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.655 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: ]] 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:52.656 00:07:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.656 nvme0n1 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.656 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: ]] 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.916 nvme0n1 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: ]] 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.916 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.176 nvme0n1 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.176 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.436 nvme0n1 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: ]] 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.436 00:07:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:53.436 nvme0n1 00:23:53.436 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.436 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.436 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.436 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.436 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.436 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: ]] 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.696 nvme0n1 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:53.696 
00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.696 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: ]] 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.697 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.956 nvme0n1 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.956 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: ]] 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.957 
00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.957 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.217 nvme0n1 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.217 nvme0n1 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.217 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: ]] 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:54.476 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.477 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:54.477 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.477 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.477 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.477 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.477 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:54.477 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.477 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.477 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.477 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.477 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:54.477 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.477 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:54.477 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:54.477 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:54.477 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:54.477 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.477 00:08:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.477 nvme0n1 00:23:54.477 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.477 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.477 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.477 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.477 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.477 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.735 
00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: ]] 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:54.735 00:08:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.735 nvme0n1 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.735 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:54.993 00:08:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: ]] 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.993 nvme0n1 00:23:54.993 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:55.252 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: ]] 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.253 00:08:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.253 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.511 nvme0n1 00:23:55.512 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.512 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.512 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.512 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.512 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.512 00:08:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:55.512 
00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.512 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
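At this point the sha512 digest has been cycled through the ffdhe2048, ffdhe3072 and ffdhe4096 groups, and the ffdhe6144 rounds begin below. Every round follows the same sequence of RPCs; the following condensed sketch is reconstructed from the xtrace output above (nvmet_auth_set_key, rpc_cmd, get_main_ns_ip and the keys/ckeys arrays are helpers and fixtures of the SPDK test suite visible in the trace, not definitions made here):

    # One pass of the (dhgroup, keyid) loops of host/auth.sh, as
    # reconstructed from the trace; sha512 is fixed for this part of the run.
    for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ...
        for keyid in "${!keys[@]}"; do         # key slots 0..4
            # Program the target side with the key (and ctrlr key, if present).
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
            # Limit the initiator to the digest/dhgroup under test.
            rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            # Attach; the ctrlr key is passed only for slots that define ckeyN.
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a "$(get_main_ns_ip)" -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key$keyid" \
                ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
            # Confirm the authenticated controller exists, then tear it down.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done

The bare nvme0n1 lines interleaved with the trace are plain stdout rather than xtrace output: they are the name of the namespace bdev reported after each successful attach call. The DHHC-1:NN:<base64>: strings are DH-HMAC-CHAP secrets in the standard NVMe representation, where the two-digit field records the transformation applied to the secret (00 for an unhashed secret, 01/02/03 for SHA-256/384/512).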
00:23:55.771 nvme0n1 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: ]] 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:55.771 00:08:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.771 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.030 nvme0n1 00:23:56.030 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.030 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.031 00:08:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: ]] 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.031 00:08:02 
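All of the secrets in this run use the NVMe DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where <t> conventionally encodes how the secret was transformed (00 = unhashed; 01/02/03 = SHA-256/384/512) and the base64 payload carries the raw secret plus a 4-byte CRC-32, which is why the key strings differ in length across keyids. That interpretation is the common DHHC-1 convention rather than anything stated in the log; one of the keys above can be inspected with plain coreutils:

    # Decode the keyid=0 secret from the trace and check its payload size.
    key='DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/:'
    payload=$(cut -d: -f3 <<< "$key")
    base64 -d <<< "$payload" | wc -c   # prints 36: 32-byte secret + 4-byte CRC-32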
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.031 00:08:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.600 nvme0n1 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: ]] 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.600 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.859 nvme0n1 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: ]] 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:56.859 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:56.860 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:56.860 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.860 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.428 nvme0n1 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.428 00:08:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.687 nvme0n1 00:23:57.687 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWY3MzRjNmM3MGVhOGQ3NGI1YmE5NzZhMjFkNDkxNjJppkw/: 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: ]] 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzIxZjQxYjlmMjljMjczMDZjOWUwZmY5MWZmZjJkNjU1ZWEzZjQ5OTY0MjY2NmE1OTUwN2ExNGY4Y2U4ZWMwMHtMiAg=: 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.688 00:08:04 
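At this point the outer loop (host/auth.sh@101) has moved from ffdhe6144 to ffdhe8192 and replays the same keyids. Each pass first narrows what the SPDK initiator may negotiate via bdev_nvme_set_options, then performs the authenticated attach. Issued directly through SPDK's rpc.py rather than the script's rpc_cmd wrapper, the pair of calls from this iteration would look like the sketch below (flags as they appear in the trace; key0/ckey0 name keys registered earlier in the test, outside this excerpt):

    # host/auth.sh@60: restrict the initiator to the digest/dhgroup under test
    rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # host/auth.sh@61: authenticated attach against the kernel target
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0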
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.688 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.257 nvme0n1 00:23:58.257 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.257 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.257 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.257 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.257 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.257 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.257 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.257 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.257 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.257 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.257 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.257 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.257 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:58.257 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.257 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:58.257 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:58.257 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:58.257 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: ]] 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:58.258 00:08:04 
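Each successful attach is verified the same way before tearing down for the next keyid: the controller list must contain exactly nvme0 (the \n\v\m\e\0 escapes in the trace are only xtrace's rendering of a literal [[ ... == pattern ]] match), the kernel surfaces the namespace as nvme0n1, and the controller is then detached. Reconstructed as plain commands:

    # host/auth.sh@64-65: confirm the authenticated controller came up, then drop it
    name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                 # DH-HMAC-CHAP handshake succeeded
    rpc.py bdev_nvme_detach_controller nvme0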
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.258 00:08:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.826 nvme0n1 00:23:58.826 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.826 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.826 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.826 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.826 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.826 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: ]] 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.086 00:08:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.654 nvme0n1 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:59.654 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2M1MTAwN2Y0YzdhMGQxYTI0ODg4MTg3ODZlNDhmYjgxYmY2MWI5ZGVhOTQwM2ZlukBKfg==: 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: ]] 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTI2NmE3ZjNjMzIyNmFlOGU2NjI4YzcwYzA2ZWNkNWTZSj/g: 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.655 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.223 nvme0n1 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGMzMDcxMzBhOGE4Njg5MzVmZDJjNzQ3MzMyYmY1NGMyYzEyMTE3N2RlNjljOGNkYWQzMGY3M2YzMjAwY2I4MVvVjn4=: 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:00.223 00:08:06 
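The keyid=4 iteration now underway is the one asymmetric case in the matrix: its ckey is empty, so host/auth.sh@51 skips writing a controller key on the target, and the :+ expansion at host/auth.sh@58 leaves the ckey array empty. The result is one-way authentication: the host proves its identity, but the controller is not challenged in return, so the attach carries only --dhchap-key. A sketch of how that falls out:

    # host/auth.sh@58: with ${ckeys[4]} unset, the :+ expansion yields nothing ...
    ckey=()
    # ... so the attach at host/auth.sh@61 omits --dhchap-ctrlr-key entirely
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key4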
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.223 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.224 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:00.224 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:00.224 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:00.224 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.224 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.224 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:00.224 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.224 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:00.224 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:00.224 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:00.224 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:00.224 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.224 00:08:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.793 nvme0n1 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: ]] 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.793 request: 00:24:00.793 { 00:24:00.793 "name": "nvme0", 00:24:00.793 "trtype": "tcp", 00:24:00.793 "traddr": "10.0.0.1", 00:24:00.793 "adrfam": "ipv4", 00:24:00.793 "trsvcid": "4420", 00:24:00.793 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:00.793 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:00.793 "prchk_reftag": false, 00:24:00.793 "prchk_guard": false, 00:24:00.793 "hdgst": false, 00:24:00.793 "ddgst": false, 00:24:00.793 "allow_unrecognized_csi": false, 00:24:00.793 "method": "bdev_nvme_attach_controller", 00:24:00.793 "req_id": 1 00:24:00.793 } 00:24:00.793 Got JSON-RPC error response 00:24:00.793 response: 00:24:00.793 { 00:24:00.793 "code": -5, 00:24:00.793 "message": "Input/output error" 00:24:00.793 } 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:00.793 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:00.794 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.794 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:00.794 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.794 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:00.794 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.794 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.053 request: 00:24:01.053 { 00:24:01.053 "name": "nvme0", 00:24:01.053 "trtype": "tcp", 00:24:01.053 "traddr": "10.0.0.1", 00:24:01.053 "adrfam": "ipv4", 00:24:01.053 "trsvcid": "4420", 00:24:01.053 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:01.053 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:01.053 "prchk_reftag": false, 00:24:01.053 "prchk_guard": false, 00:24:01.053 "hdgst": false, 00:24:01.053 "ddgst": false, 00:24:01.053 "dhchap_key": "key2", 00:24:01.053 "allow_unrecognized_csi": false, 00:24:01.053 "method": "bdev_nvme_attach_controller", 00:24:01.053 "req_id": 1 00:24:01.053 } 00:24:01.053 Got JSON-RPC error response 00:24:01.053 response: 00:24:01.053 { 00:24:01.053 "code": -5, 00:24:01.053 "message": "Input/output error" 00:24:01.053 } 00:24:01.053 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:01.053 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:01.053 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:01.053 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:01.053 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:01.053 00:08:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.053 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.053 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.053 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:01.053 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.053 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:01.053 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.054 request: 00:24:01.054 { 00:24:01.054 "name": "nvme0", 00:24:01.054 "trtype": "tcp", 00:24:01.054 "traddr": "10.0.0.1", 00:24:01.054 "adrfam": "ipv4", 00:24:01.054 "trsvcid": "4420", 
00:24:01.054 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:01.054 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:01.054 "prchk_reftag": false, 00:24:01.054 "prchk_guard": false, 00:24:01.054 "hdgst": false, 00:24:01.054 "ddgst": false, 00:24:01.054 "dhchap_key": "key1", 00:24:01.054 "dhchap_ctrlr_key": "ckey2", 00:24:01.054 "allow_unrecognized_csi": false, 00:24:01.054 "method": "bdev_nvme_attach_controller", 00:24:01.054 "req_id": 1 00:24:01.054 } 00:24:01.054 Got JSON-RPC error response 00:24:01.054 response: 00:24:01.054 { 00:24:01.054 "code": -5, 00:24:01.054 "message": "Input/output error" 00:24:01.054 } 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.054 nvme0n1 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: ]] 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.054 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.313 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.313 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.313 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.313 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.313 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:24:01.313 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.313 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.313 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:01.313 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:01.313 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:01.313 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:01.313 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.313 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:01.313 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.313 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:01.313 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.313 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.313 request: 00:24:01.313 { 00:24:01.313 "name": "nvme0", 00:24:01.313 "dhchap_key": "key1", 00:24:01.313 "dhchap_ctrlr_key": "ckey2", 00:24:01.313 "method": "bdev_nvme_set_keys", 00:24:01.313 "req_id": 1 00:24:01.313 } 00:24:01.313 Got JSON-RPC error response 00:24:01.313 response: 00:24:01.313 
{ 00:24:01.313 "code": -13, 00:24:01.313 "message": "Permission denied" 00:24:01.313 } 00:24:01.313 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:01.313 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:01.314 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:01.314 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:01.314 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:01.314 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.314 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:01.314 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.314 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.314 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.314 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:01.314 00:08:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:02.250 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.250 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:02.250 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.250 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.250 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTVhMDNiYWM2NjcyYzJiMWFlNGQ4ZWQ1ZmU0M2ExMTVjYzFhYjgwZDkzYzY0YWI0Dbuwtw==: 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: ]] 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWJiMGM2MjdkODQ0YzczMDI4YzBjYzJhNzdmNTNjNmQwY2VhYzFiNTdiODRmOGYzIS8ruw==: 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.510 00:08:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.510 nvme0n1 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq: 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: ]] 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDllMGYyNmVhMDliMjgzZDM1NWM3NTcyNjQ3OTVjMTWWjlWc: 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.510 request: 00:24:02.510 { 00:24:02.510 "name": "nvme0", 00:24:02.510 "dhchap_key": "key2", 00:24:02.510 "dhchap_ctrlr_key": "ckey1", 00:24:02.510 "method": "bdev_nvme_set_keys", 00:24:02.510 "req_id": 1 00:24:02.510 } 00:24:02.510 Got JSON-RPC error response 00:24:02.510 response: 00:24:02.510 { 00:24:02.510 "code": -13, 00:24:02.510 "message": "Permission denied" 00:24:02.510 } 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:02.510 00:08:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:24:03.887 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.887 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:03.887 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.887 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.887 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.887 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:24:03.887 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:24:03.887 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.888 rmmod nvme_tcp 00:24:03.888 rmmod nvme_fabrics 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 84359 ']' 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 84359 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 84359 ']' 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 84359 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84359 00:24:03.888 killing process with pid 84359 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84359' 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 84359 00:24:03.888 00:08:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 84359 00:24:04.456 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:04.456 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:04.456 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:04.456 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:24:04.456 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:04.457 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:24:04.457 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:04.457 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.457 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:04.457 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:04.457 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:04.457 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:04.457 00:08:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:04.457 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:04.457 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:04.457 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:04.457 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:04.457 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:04.457 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:04.716 00:08:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:05.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:05.545 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
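The cleanup path traced above tears down the kernel nvmet target by unwinding its configfs tree in reverse creation order before unloading the modules. A condensed sketch of that sequence, with paths copied from the trace; the redirect target of the `echo 0` step is an assumption, since xtrace does not print redirections:

    # Kernel nvmet teardown as traced above.
    cfs=/sys/kernel/config/nvmet
    subsys=$cfs/subsystems/nqn.2024-02.io.spdk:cnode0
    rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"        # unlink the allowed host
    rmdir "$cfs/hosts/nqn.2024-02.io.spdk:host0"                # drop the host entry
    echo 0 > "$subsys/namespaces/1/enable"                      # disable namespace (assumed target)
    rm -f "$cfs/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"  # detach subsystem from port
    rmdir "$subsys/namespaces/1"                                # then remove leaves before parents
    rmdir "$cfs/ports/1"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet                                 # unload kernel target modules

Once the configfs tree is empty, setup.sh rebinds the NVMe devices to uio_pci_generic for the next userspace test, as the device lines above show.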
00:24:05.545 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:05.545 00:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ekb /tmp/spdk.key-null.FFA /tmp/spdk.key-sha256.1sj /tmp/spdk.key-sha384.eUc /tmp/spdk.key-sha512.GxZ /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:24:05.545 00:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:06.114 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:06.114 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:06.114 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:06.114 00:24:06.114 real 0m36.859s 00:24:06.114 user 0m33.687s 00:24:06.114 sys 0m4.083s 00:24:06.114 ************************************ 00:24:06.114 END TEST nvmf_auth_host 00:24:06.114 ************************************ 00:24:06.114 00:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:06.114 00:08:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.114 00:08:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:24:06.114 00:08:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:06.114 00:08:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:06.114 00:08:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.114 00:08:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.114 ************************************ 00:24:06.114 START TEST nvmf_digest 00:24:06.114 ************************************ 00:24:06.114 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:06.114 * Looking for test storage... 
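The temporary files removed above (/tmp/spdk.key-null.*, /tmp/spdk.key-sha256.*, ...) held DH-HMAC-CHAP secrets in the DHHC-1:<hmac>:<base64>: form that appears throughout the trace. A small sketch that decodes the HMAC field of one such key, using a secret copied from the trace; the 00/01/02/03 mapping is assumed from the NVMe in-band authentication secret representation:

    # Decode the HMAC id of a DH-HMAC-CHAP secret (assumed mapping:
    # 00 = cleartext, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512).
    key='DHHC-1:01:NDJjZDAxZmNmODU1ODEwNWI5YzNiODFkYmI5Y2E5ZDkidGVq:'
    IFS=: read -r tag hmac secret _ <<< "$key"
    case $hmac in
      00) echo "cleartext secret" ;;
      01) echo "hmac(sha256)" ;;
      02) echo "hmac(sha384)" ;;
      03) echo "hmac(sha512)" ;;
    esac

For the key above this prints hmac(sha256), matching the 'hmac(sha256)' the test echoed when configuring the sha256/ffdhe2048 keys earlier in the trace.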
00:24:06.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:06.114 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:06.114 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:06.114 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.375 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:06.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.375 --rc genhtml_branch_coverage=1 00:24:06.375 --rc genhtml_function_coverage=1 00:24:06.375 --rc genhtml_legend=1 00:24:06.375 --rc geninfo_all_blocks=1 00:24:06.375 --rc geninfo_unexecuted_blocks=1 00:24:06.375 00:24:06.375 ' 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:06.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.376 --rc genhtml_branch_coverage=1 00:24:06.376 --rc genhtml_function_coverage=1 00:24:06.376 --rc genhtml_legend=1 00:24:06.376 --rc geninfo_all_blocks=1 00:24:06.376 --rc geninfo_unexecuted_blocks=1 00:24:06.376 00:24:06.376 ' 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:06.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.376 --rc genhtml_branch_coverage=1 00:24:06.376 --rc genhtml_function_coverage=1 00:24:06.376 --rc genhtml_legend=1 00:24:06.376 --rc geninfo_all_blocks=1 00:24:06.376 --rc geninfo_unexecuted_blocks=1 00:24:06.376 00:24:06.376 ' 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:06.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.376 --rc genhtml_branch_coverage=1 00:24:06.376 --rc genhtml_function_coverage=1 00:24:06.376 --rc genhtml_legend=1 00:24:06.376 --rc geninfo_all_blocks=1 00:24:06.376 --rc geninfo_unexecuted_blocks=1 00:24:06.376 00:24:06.376 ' 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.376 00:08:12 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.376 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:06.376 Cannot find device "nvmf_init_br" 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:06.376 Cannot find device "nvmf_init_br2" 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:06.376 Cannot find device "nvmf_tgt_br" 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:24:06.376 Cannot find device "nvmf_tgt_br2" 00:24:06.376 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:24:06.377 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:06.377 Cannot find device "nvmf_init_br" 00:24:06.377 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:24:06.377 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:06.377 Cannot find device "nvmf_init_br2" 00:24:06.377 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:24:06.377 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:06.377 Cannot find device "nvmf_tgt_br" 00:24:06.377 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:24:06.377 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:06.377 Cannot find device "nvmf_tgt_br2" 00:24:06.377 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:24:06.377 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:06.377 Cannot find device "nvmf_br" 00:24:06.377 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:24:06.377 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:06.377 Cannot find device "nvmf_init_if" 00:24:06.377 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:24:06.377 00:08:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:06.377 Cannot find device "nvmf_init_if2" 00:24:06.377 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:24:06.377 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:06.377 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:06.377 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:24:06.377 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:06.377 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:06.377 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:24:06.377 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:06.377 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:06.377 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:06.377 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:06.377 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:06.377 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:06.637 00:08:13 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:06.637 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:06.637 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:24:06.637 00:24:06.637 --- 10.0.0.3 ping statistics --- 00:24:06.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.637 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:06.637 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:06.637 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:24:06.637 00:24:06.637 --- 10.0.0.4 ping statistics --- 00:24:06.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.637 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:06.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:06.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:24:06.637 00:24:06.637 --- 10.0.0.1 ping statistics --- 00:24:06.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.637 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:06.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:06.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:24:06.637 00:24:06.637 --- 10.0.0.2 ping statistics --- 00:24:06.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.637 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:06.637 ************************************ 00:24:06.637 START TEST nvmf_digest_clean 00:24:06.637 ************************************ 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
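Each firewall rule added above carries an SPDK_NVMF comment, which is what allowed the earlier cleanup pass (iptables-save | grep -v SPDK_NVMF | iptables-restore) to strip the test's rules without touching anything else on the host. A minimal re-creation of that ipts() wrapper, as implied by the comments visible in the trace:

    # Tag every rule with its own arguments so teardown can filter on SPDK_NVMF.
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Later, cleanup removes only the tagged rules:
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The four pings above (10.0.0.3/.4 from the host, 10.0.0.1/.2 from inside nvmf_tgt_ns_spdk) confirm the veth-and-bridge topology is passing traffic in both directions before the digest test starts its target.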
00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:06.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=86004 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 86004 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 86004 ']' 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.637 00:08:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:06.897 [2024-11-19 00:08:13.434754] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:06.897 [2024-11-19 00:08:13.434921] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.156 [2024-11-19 00:08:13.627577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.156 [2024-11-19 00:08:13.751532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.156 [2024-11-19 00:08:13.751881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.156 [2024-11-19 00:08:13.751924] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.156 [2024-11-19 00:08:13.751957] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.156 [2024-11-19 00:08:13.751979] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
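nvmfappstart launches nvmf_tgt inside the namespace with --wait-for-rpc, so the app comes up paused until framework_start_init, and waitforlisten then blocks until the RPC socket answers instead of sleeping a fixed time. A rough sketch of that polling idea (wait_for_rpc_sock is a hypothetical helper name; the real waitforlisten in autotest_common.sh is more elaborate):

    # Poll the UNIX-domain RPC socket until the target responds.
    # rpc_get_methods is a standard SPDK RPC that answers even pre-init.
    wait_for_rpc_sock() {
        local sock=$1 retries=${2:-100}
        while (( retries-- > 0 )); do
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
                >/dev/null 2>&1 && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc_sock /var/tmp/spdk.sock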
00:24:07.156 [2024-11-19 00:08:13.753449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:08.094 [2024-11-19 00:08:14.630488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:08.094 null0 00:24:08.094 [2024-11-19 00:08:14.725263] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.094 [2024-11-19 00:08:14.749424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:08.094 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86036 00:24:08.095 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:08.095 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86036 /var/tmp/bperf.sock 00:24:08.095 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 86036 ']' 00:24:08.095 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:24:08.095 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.095 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:08.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:08.095 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.095 00:08:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:08.354 [2024-11-19 00:08:14.866785] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:08.354 [2024-11-19 00:08:14.866954] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86036 ] 00:24:08.614 [2024-11-19 00:08:15.050290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.614 [2024-11-19 00:08:15.173202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.184 00:08:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.184 00:08:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:09.184 00:08:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:09.184 00:08:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:09.184 00:08:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:09.752 [2024-11-19 00:08:16.171626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:09.752 00:08:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:09.752 00:08:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:10.012 nvme0n1 00:24:10.012 00:08:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:10.012 00:08:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:10.012 Running I/O for 2 seconds... 
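Each run_bperf pass repeats the pattern traced above: start bdevperf paused on its own RPC socket, initialize its framework, attach the listener at 10.0.0.3:4420 as bdev nvme0 with --ddgst (host-side NVMe/TCP data digest, which is what exercises crc32c), then kick the 2-second workload over the socket. A condensed, self-contained sketch with the flags copied from this log:

    # One bdevperf pass; -z makes it wait for the perform_tests RPC.
    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF=/var/tmp/bperf.sock
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF" \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    until "$SPDK/scripts/rpc.py" -s "$BPERF" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1    # crude stand-in for the suite's waitforlisten
    done
    "$SPDK/scripts/rpc.py" -s "$BPERF" framework_start_init
    "$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF" perform_tests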
00:24:12.326 14605.00 IOPS, 57.05 MiB/s [2024-11-19T00:08:19.018Z] 14795.50 IOPS, 57.79 MiB/s 00:24:12.326 Latency(us) 00:24:12.326 [2024-11-19T00:08:19.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.326 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:12.326 nvme0n1 : 2.01 14784.06 57.75 0.00 0.00 8650.91 8102.63 27167.65 00:24:12.326 [2024-11-19T00:08:19.018Z] =================================================================================================================== 00:24:12.326 [2024-11-19T00:08:19.018Z] Total : 14784.06 57.75 0.00 0.00 8650.91 8102.63 27167.65 00:24:12.326 { 00:24:12.326 "results": [ 00:24:12.326 { 00:24:12.326 "job": "nvme0n1", 00:24:12.326 "core_mask": "0x2", 00:24:12.326 "workload": "randread", 00:24:12.326 "status": "finished", 00:24:12.326 "queue_depth": 128, 00:24:12.326 "io_size": 4096, 00:24:12.326 "runtime": 2.010205, 00:24:12.326 "iops": 14784.064311848791, 00:24:12.326 "mibps": 57.75025121815934, 00:24:12.326 "io_failed": 0, 00:24:12.326 "io_timeout": 0, 00:24:12.326 "avg_latency_us": 8650.90576178692, 00:24:12.326 "min_latency_us": 8102.632727272728, 00:24:12.326 "max_latency_us": 27167.65090909091 00:24:12.326 } 00:24:12.326 ], 00:24:12.326 "core_count": 1 00:24:12.326 } 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:12.327 | select(.opcode=="crc32c") 00:24:12.327 | "\(.module_name) \(.executed)"' 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86036 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 86036 ']' 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 86036 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86036 00:24:12.327 killing process with pid 86036 00:24:12.327 Received shutdown signal, test time was about 2.000000 seconds 00:24:12.327 00:24:12.327 Latency(us) 00:24:12.327 [2024-11-19T00:08:19.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:12.327 [2024-11-19T00:08:19.019Z] =================================================================================================================== 00:24:12.327 [2024-11-19T00:08:19.019Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86036' 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 86036 00:24:12.327 00:08:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 86036 00:24:13.282 00:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:13.282 00:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:13.282 00:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:13.282 00:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:13.282 00:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:13.282 00:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:13.283 00:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:13.283 00:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86103 00:24:13.283 00:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:13.283 00:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86103 /var/tmp/bperf.sock 00:24:13.283 00:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 86103 ']' 00:24:13.283 00:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:13.283 00:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:13.283 00:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:13.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:13.283 00:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:13.283 00:08:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:13.283 [2024-11-19 00:08:19.835823] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:24:13.283 [2024-11-19 00:08:19.836218] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86103 ] 00:24:13.283 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:13.283 Zero copy mechanism will not be used. 00:24:13.630 [2024-11-19 00:08:20.013618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.630 [2024-11-19 00:08:20.096671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.199 00:08:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:14.199 00:08:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:14.199 00:08:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:14.199 00:08:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:14.199 00:08:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:14.459 [2024-11-19 00:08:21.125208] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:14.718 00:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:14.718 00:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:14.978 nvme0n1 00:24:14.978 00:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:14.978 00:08:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:14.978 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:14.978 Zero copy mechanism will not be used. 00:24:14.978 Running I/O for 2 seconds... 
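For the 131072-byte passes, bdevperf notes above that the I/O size exceeds the socket zero-copy threshold (65536), so the uring socket layer falls back to copying; the digest computation itself is unaffected. Each pass also ends with a JSON summary like the ones above and below; a small sketch for pulling the headline fields back out, where summary.json is a hypothetical capture of one of those blocks:

    # Extract per-job results from a saved bdevperf JSON summary.
    jq -r '.results[] |
        "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' \
        summary.json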
00:24:17.289 7488.00 IOPS, 936.00 MiB/s [2024-11-19T00:08:23.981Z] 7440.00 IOPS, 930.00 MiB/s 00:24:17.289 Latency(us) 00:24:17.289 [2024-11-19T00:08:23.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.289 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:17.289 nvme0n1 : 2.00 7441.08 930.13 0.00 0.00 2146.84 1966.08 3589.59 00:24:17.289 [2024-11-19T00:08:23.981Z] =================================================================================================================== 00:24:17.289 [2024-11-19T00:08:23.981Z] Total : 7441.08 930.13 0.00 0.00 2146.84 1966.08 3589.59 00:24:17.289 { 00:24:17.289 "results": [ 00:24:17.289 { 00:24:17.289 "job": "nvme0n1", 00:24:17.289 "core_mask": "0x2", 00:24:17.289 "workload": "randread", 00:24:17.289 "status": "finished", 00:24:17.289 "queue_depth": 16, 00:24:17.289 "io_size": 131072, 00:24:17.289 "runtime": 2.001861, 00:24:17.289 "iops": 7441.076078708761, 00:24:17.289 "mibps": 930.1345098385951, 00:24:17.289 "io_failed": 0, 00:24:17.289 "io_timeout": 0, 00:24:17.289 "avg_latency_us": 2146.841347524656, 00:24:17.289 "min_latency_us": 1966.08, 00:24:17.289 "max_latency_us": 3589.5854545454545 00:24:17.289 } 00:24:17.289 ], 00:24:17.289 "core_count": 1 00:24:17.289 } 00:24:17.289 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:17.289 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:17.289 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:17.290 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:17.290 | select(.opcode=="crc32c") 00:24:17.290 | "\(.module_name) \(.executed)"' 00:24:17.290 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:17.290 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:17.290 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:17.290 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:17.290 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:17.290 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86103 00:24:17.290 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 86103 ']' 00:24:17.290 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 86103 00:24:17.290 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:17.290 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.290 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86103 00:24:17.290 killing process with pid 86103 00:24:17.290 Received shutdown signal, test time was about 2.000000 seconds 00:24:17.290 00:24:17.290 Latency(us) 00:24:17.290 [2024-11-19T00:08:23.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.290 
[2024-11-19T00:08:23.982Z] =================================================================================================================== 00:24:17.290 [2024-11-19T00:08:23.982Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:17.290 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:17.290 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:17.290 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86103' 00:24:17.290 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 86103 00:24:17.290 00:08:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 86103 00:24:18.226 00:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:18.226 00:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:18.226 00:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:18.226 00:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:18.226 00:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:18.226 00:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:18.226 00:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:18.226 00:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86171 00:24:18.226 00:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:18.226 00:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86171 /var/tmp/bperf.sock 00:24:18.226 00:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 86171 ']' 00:24:18.226 00:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:18.226 00:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:18.226 00:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:18.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:18.226 00:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:18.226 00:08:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:18.226 [2024-11-19 00:08:24.844357] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:24:18.226 [2024-11-19 00:08:24.844519] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86171 ] 00:24:18.485 [2024-11-19 00:08:25.022707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.485 [2024-11-19 00:08:25.102324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.419 00:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:19.419 00:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:19.419 00:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:19.419 00:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:19.419 00:08:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:19.419 [2024-11-19 00:08:26.105186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:19.678 00:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:19.678 00:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:19.937 nvme0n1 00:24:19.937 00:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:19.937 00:08:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:20.196 Running I/O for 2 seconds... 
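The pass/fail check after each run is the digest.sh@93-96 sequence visible above: pull accel statistics over the bperf socket, filter the crc32c opcode, and require that the expected module (software here, since scan_dsa=false) actually executed operations. A sketch of that check, reusing the exact jq filter from this log:

    # Expect crc32c to have been executed, and by the software module.
    SPDK=/home/vagrant/spdk_repo/spdk
    read -r acc_module acc_executed < <(
        "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    [[ $acc_module == software ]] && (( acc_executed > 0 )) && echo PASS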
00:24:22.067 15876.00 IOPS, 62.02 MiB/s [2024-11-19T00:08:28.759Z] 15844.00 IOPS, 61.89 MiB/s 00:24:22.067 Latency(us) 00:24:22.067 [2024-11-19T00:08:28.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.067 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:22.067 nvme0n1 : 2.01 15831.57 61.84 0.00 0.00 8078.13 5332.25 17754.30 00:24:22.067 [2024-11-19T00:08:28.759Z] =================================================================================================================== 00:24:22.067 [2024-11-19T00:08:28.759Z] Total : 15831.57 61.84 0.00 0.00 8078.13 5332.25 17754.30 00:24:22.067 { 00:24:22.067 "results": [ 00:24:22.067 { 00:24:22.067 "job": "nvme0n1", 00:24:22.067 "core_mask": "0x2", 00:24:22.067 "workload": "randwrite", 00:24:22.067 "status": "finished", 00:24:22.067 "queue_depth": 128, 00:24:22.067 "io_size": 4096, 00:24:22.067 "runtime": 2.009655, 00:24:22.067 "iops": 15831.573080951706, 00:24:22.067 "mibps": 61.8420823474676, 00:24:22.067 "io_failed": 0, 00:24:22.067 "io_timeout": 0, 00:24:22.067 "avg_latency_us": 8078.13041351407, 00:24:22.067 "min_latency_us": 5332.2472727272725, 00:24:22.067 "max_latency_us": 17754.298181818183 00:24:22.067 } 00:24:22.067 ], 00:24:22.067 "core_count": 1 00:24:22.067 } 00:24:22.067 00:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:22.067 00:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:22.067 00:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:22.067 00:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:22.067 00:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:22.067 | select(.opcode=="crc32c") 00:24:22.067 | "\(.module_name) \(.executed)"' 00:24:22.326 00:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:22.326 00:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:22.326 00:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:22.326 00:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:22.326 00:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86171 00:24:22.326 00:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 86171 ']' 00:24:22.326 00:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 86171 00:24:22.326 00:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:22.326 00:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.326 00:08:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86171 00:24:22.326 killing process with pid 86171 00:24:22.326 Received shutdown signal, test time was about 2.000000 seconds 00:24:22.326 00:24:22.326 Latency(us) 00:24:22.326 [2024-11-19T00:08:29.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:22.326 [2024-11-19T00:08:29.018Z] =================================================================================================================== 00:24:22.326 [2024-11-19T00:08:29.018Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:22.326 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:22.326 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:22.326 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86171' 00:24:22.326 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 86171 00:24:22.326 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 86171 00:24:23.262 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:23.262 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:23.262 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:23.262 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:23.262 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:23.262 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:23.262 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:23.262 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86238 00:24:23.262 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:23.262 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86238 /var/tmp/bperf.sock 00:24:23.262 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 86238 ']' 00:24:23.262 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:23.262 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.262 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:23.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:23.262 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.262 00:08:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:23.262 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:23.262 Zero copy mechanism will not be used. 00:24:23.262 [2024-11-19 00:08:29.846818] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:24:23.262 [2024-11-19 00:08:29.846982] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86238 ] 00:24:23.520 [2024-11-19 00:08:30.023445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.521 [2024-11-19 00:08:30.105258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.458 00:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.458 00:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:24.458 00:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:24.458 00:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:24.458 00:08:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:24.458 [2024-11-19 00:08:31.135389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:24.717 00:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:24.717 00:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:24.976 nvme0n1 00:24:24.976 00:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:24.976 00:08:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:25.234 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:25.234 Zero copy mechanism will not be used. 00:24:25.234 Running I/O for 2 seconds... 
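A useful cross-check on the tables in this section: MiB/s is just IOPS scaled by the I/O size, so the 131072-byte runs should report IOPS divided by 8. For the run below, for instance:

    # 5756.23 IOPS * 131072 B / 2^20 B/MiB = 719.53 MiB/s, matching the table.
    awk 'BEGIN { printf "%.2f MiB/s\n", 5756.23 * 131072 / 1048576 }'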
00:24:27.106 5795.00 IOPS, 724.38 MiB/s [2024-11-19T00:08:33.798Z] 5760.00 IOPS, 720.00 MiB/s 00:24:27.106 Latency(us) 00:24:27.106 [2024-11-19T00:08:33.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.106 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:27.106 nvme0n1 : 2.00 5756.23 719.53 0.00 0.00 2772.81 1683.08 4438.57 00:24:27.106 [2024-11-19T00:08:33.798Z] =================================================================================================================== 00:24:27.106 [2024-11-19T00:08:33.798Z] Total : 5756.23 719.53 0.00 0.00 2772.81 1683.08 4438.57 00:24:27.106 { 00:24:27.106 "results": [ 00:24:27.106 { 00:24:27.106 "job": "nvme0n1", 00:24:27.106 "core_mask": "0x2", 00:24:27.106 "workload": "randwrite", 00:24:27.106 "status": "finished", 00:24:27.106 "queue_depth": 16, 00:24:27.106 "io_size": 131072, 00:24:27.106 "runtime": 2.004089, 00:24:27.106 "iops": 5756.231384933503, 00:24:27.106 "mibps": 719.5289231166879, 00:24:27.106 "io_failed": 0, 00:24:27.106 "io_timeout": 0, 00:24:27.106 "avg_latency_us": 2772.807625772286, 00:24:27.106 "min_latency_us": 1683.0836363636363, 00:24:27.106 "max_latency_us": 4438.574545454546 00:24:27.106 } 00:24:27.106 ], 00:24:27.106 "core_count": 1 00:24:27.106 } 00:24:27.106 00:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:27.106 00:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:27.106 00:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:27.106 00:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:27.106 | select(.opcode=="crc32c") 00:24:27.106 | "\(.module_name) \(.executed)"' 00:24:27.106 00:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:27.365 00:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:27.365 00:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:27.365 00:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:27.365 00:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:27.365 00:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86238 00:24:27.365 00:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 86238 ']' 00:24:27.365 00:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 86238 00:24:27.365 00:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:27.365 00:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.365 00:08:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86238 00:24:27.365 killing process with pid 86238 00:24:27.365 Received shutdown signal, test time was about 2.000000 seconds 00:24:27.365 00:24:27.365 Latency(us) 00:24:27.365 [2024-11-19T00:08:34.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:27.365 [2024-11-19T00:08:34.057Z] =================================================================================================================== 00:24:27.365 [2024-11-19T00:08:34.057Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:27.366 00:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:27.366 00:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:27.366 00:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86238' 00:24:27.366 00:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 86238 00:24:27.366 00:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 86238 00:24:28.302 00:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 86004 00:24:28.302 00:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 86004 ']' 00:24:28.303 00:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 86004 00:24:28.303 00:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:28.303 00:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.303 00:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86004 00:24:28.303 killing process with pid 86004 00:24:28.303 00:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:28.303 00:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:28.303 00:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86004' 00:24:28.303 00:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 86004 00:24:28.303 00:08:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 86004 00:24:29.239 ************************************ 00:24:29.239 END TEST nvmf_digest_clean 00:24:29.239 ************************************ 00:24:29.239 00:24:29.239 real 0m22.333s 00:24:29.239 user 0m43.078s 00:24:29.239 sys 0m4.483s 00:24:29.239 00:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:29.239 00:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:29.239 00:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:29.239 00:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:29.239 00:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:29.239 00:08:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:29.239 ************************************ 00:24:29.239 START TEST nvmf_digest_error 00:24:29.239 ************************************ 00:24:29.239 00:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:24:29.239 00:08:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:29.239 00:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:29.239 00:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:29.239 00:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:29.239 00:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=86341 00:24:29.239 00:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 86341 00:24:29.239 00:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:29.239 00:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 86341 ']' 00:24:29.239 00:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.239 00:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:29.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.239 00:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.239 00:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:29.239 00:08:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:29.239 [2024-11-19 00:08:35.815655] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:29.240 [2024-11-19 00:08:35.816420] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.499 [2024-11-19 00:08:35.995368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.499 [2024-11-19 00:08:36.074571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.499 [2024-11-19 00:08:36.074639] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.499 [2024-11-19 00:08:36.074657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.499 [2024-11-19 00:08:36.074678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.499 [2024-11-19 00:08:36.074690] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
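Where the clean test verified the software crc32c path, this error variant re-routes crc32c to the error-injection accel module before the framework initializes (which is why the target again starts with --wait-for-rpc), and later injects corrupted digest results so the host sees data digest errors and transient transport completions, visible further down. A sketch of that setup using the RPCs traced below, with paths as in this job:

    # Route crc32c to the error module pre-init, then inject 256
    # corrupted digest results once the target is configured.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" accel_assign_opc -o crc32c -m error
    # ... common target config: TCP transport + listener on 10.0.0.3:4420 ...
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256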
00:24:29.499 [2024-11-19 00:08:36.075684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.067 00:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:30.067 00:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:30.067 00:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:30.067 00:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:30.067 00:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:30.067 00:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.067 00:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:30.067 00:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.067 00:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:30.067 [2024-11-19 00:08:36.740419] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:30.067 00:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.067 00:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:30.067 00:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:30.067 00:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.067 00:08:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:30.326 [2024-11-19 00:08:36.892444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:30.326 null0 00:24:30.326 [2024-11-19 00:08:36.987518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.326 [2024-11-19 00:08:37.011811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:30.586 00:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.586 00:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:30.586 00:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:30.586 00:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:30.586 00:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:30.586 00:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:30.586 00:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86373 00:24:30.586 00:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86373 /var/tmp/bperf.sock 00:24:30.586 00:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:30.586 00:08:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 86373 ']' 00:24:30.586 00:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:30.586 00:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:30.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:30.586 00:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:30.586 00:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:30.586 00:08:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:30.586 [2024-11-19 00:08:37.127485] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:30.586 [2024-11-19 00:08:37.127668] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86373 ] 00:24:30.845 [2024-11-19 00:08:37.313740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.845 [2024-11-19 00:08:37.435933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.104 [2024-11-19 00:08:37.618955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:31.671 00:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.671 00:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:31.671 00:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:31.671 00:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:31.671 00:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:31.671 00:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.671 00:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:31.671 00:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.671 00:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:31.671 00:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:32.240 nvme0n1 00:24:32.240 00:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:32.240 00:08:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.240 00:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:32.240 00:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.241 00:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:32.241 00:08:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:32.241 Running I/O for 2 seconds... 00:24:32.241 [2024-11-19 00:08:38.803938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.241 [2024-11-19 00:08:38.804009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.241 [2024-11-19 00:08:38.804031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.241 [2024-11-19 00:08:38.821414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.241 [2024-11-19 00:08:38.821471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.241 [2024-11-19 00:08:38.821492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.241 [2024-11-19 00:08:38.838563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.241 [2024-11-19 00:08:38.838636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.241 [2024-11-19 00:08:38.838655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.241 [2024-11-19 00:08:38.855728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.241 [2024-11-19 00:08:38.855786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.241 [2024-11-19 00:08:38.855806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.241 [2024-11-19 00:08:38.873545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.241 [2024-11-19 00:08:38.873587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.241 [2024-11-19 00:08:38.873634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.241 [2024-11-19 00:08:38.891272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.241 [2024-11-19 00:08:38.891321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.241 [2024-11-19 00:08:38.891339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.241 [2024-11-19 00:08:38.908560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.241 [2024-11-19 00:08:38.908642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.241 [2024-11-19 00:08:38.908682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.241 [2024-11-19 00:08:38.926311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.241 [2024-11-19 00:08:38.926386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.241 [2024-11-19 00:08:38.926408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.500 [2024-11-19 00:08:38.946666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.501 [2024-11-19 00:08:38.946733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.501 [2024-11-19 00:08:38.946752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.501 [2024-11-19 00:08:38.966297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.501 [2024-11-19 00:08:38.966355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.501 [2024-11-19 00:08:38.966376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.501 [2024-11-19 00:08:38.984681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.501 [2024-11-19 00:08:38.984743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.501 [2024-11-19 00:08:38.984761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.501 [2024-11-19 00:08:39.002974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.501 [2024-11-19 00:08:39.003017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.501 [2024-11-19 00:08:39.003037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.501 [2024-11-19 00:08:39.021578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.501 [2024-11-19 
00:08:39.021649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.501 [2024-11-19 00:08:39.021667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.501 [2024-11-19 00:08:39.039782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.501 [2024-11-19 00:08:39.039848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.501 [2024-11-19 00:08:39.039867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.501 [2024-11-19 00:08:39.058214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.501 [2024-11-19 00:08:39.058272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.501 [2024-11-19 00:08:39.058292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.501 [2024-11-19 00:08:39.076126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.501 [2024-11-19 00:08:39.076189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.501 [2024-11-19 00:08:39.076207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.501 [2024-11-19 00:08:39.094375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.501 [2024-11-19 00:08:39.094433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.501 [2024-11-19 00:08:39.094456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.501 [2024-11-19 00:08:39.112685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.501 [2024-11-19 00:08:39.112741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.501 [2024-11-19 00:08:39.112761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.501 [2024-11-19 00:08:39.130737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.501 [2024-11-19 00:08:39.130801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.501 [2024-11-19 00:08:39.130819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.501 [2024-11-19 00:08:39.149034] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.501 [2024-11-19 00:08:39.149092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.501 [2024-11-19 00:08:39.149112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.501 [2024-11-19 00:08:39.167648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.501 [2024-11-19 00:08:39.167705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.501 [2024-11-19 00:08:39.167725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.501 [2024-11-19 00:08:39.186135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.501 [2024-11-19 00:08:39.186197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.501 [2024-11-19 00:08:39.186214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.759 [2024-11-19 00:08:39.204754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.759 [2024-11-19 00:08:39.204810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.759 [2024-11-19 00:08:39.204830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.759 [2024-11-19 00:08:39.221928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.759 [2024-11-19 00:08:39.221993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.759 [2024-11-19 00:08:39.222011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.759 [2024-11-19 00:08:39.239015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.759 [2024-11-19 00:08:39.239077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.759 [2024-11-19 00:08:39.239094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.759 [2024-11-19 00:08:39.256197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.759 [2024-11-19 00:08:39.256254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.759 [2024-11-19 00:08:39.256297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.759 [2024-11-19 00:08:39.273395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.759 [2024-11-19 00:08:39.273457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.759 [2024-11-19 00:08:39.273475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.759 [2024-11-19 00:08:39.290534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.759 [2024-11-19 00:08:39.290596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.759 [2024-11-19 00:08:39.290624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.759 [2024-11-19 00:08:39.307737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.759 [2024-11-19 00:08:39.307794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.759 [2024-11-19 00:08:39.307813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.759 [2024-11-19 00:08:39.324923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.759 [2024-11-19 00:08:39.324984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.759 [2024-11-19 00:08:39.325002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.759 [2024-11-19 00:08:39.341977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.759 [2024-11-19 00:08:39.342040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.759 [2024-11-19 00:08:39.342067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.759 [2024-11-19 00:08:39.359123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.759 [2024-11-19 00:08:39.359179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.759 [2024-11-19 00:08:39.359199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.759 [2024-11-19 00:08:39.376231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.759 [2024-11-19 00:08:39.376317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.759 [2024-11-19 00:08:39.376335] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.759 [2024-11-19 00:08:39.393485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.759 [2024-11-19 00:08:39.393547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.759 [2024-11-19 00:08:39.393564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.759 [2024-11-19 00:08:39.410490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.759 [2024-11-19 00:08:39.410546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.759 [2024-11-19 00:08:39.410565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.759 [2024-11-19 00:08:39.427644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.759 [2024-11-19 00:08:39.427704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.759 [2024-11-19 00:08:39.427721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.759 [2024-11-19 00:08:39.445058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:32.759 [2024-11-19 00:08:39.445121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.759 [2024-11-19 00:08:39.445154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.018 [2024-11-19 00:08:39.463253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.018 [2024-11-19 00:08:39.463309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.018 [2024-11-19 00:08:39.463328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.018 [2024-11-19 00:08:39.480499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.018 [2024-11-19 00:08:39.480565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.018 [2024-11-19 00:08:39.480584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.018 [2024-11-19 00:08:39.497690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.018 [2024-11-19 00:08:39.497754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5760 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.018 [2024-11-19 00:08:39.497772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.018 [2024-11-19 00:08:39.514752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.018 [2024-11-19 00:08:39.514809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.018 [2024-11-19 00:08:39.514830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.018 [2024-11-19 00:08:39.533805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.019 [2024-11-19 00:08:39.533871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.019 [2024-11-19 00:08:39.533890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.019 [2024-11-19 00:08:39.553988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.019 [2024-11-19 00:08:39.554042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.019 [2024-11-19 00:08:39.554062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.019 [2024-11-19 00:08:39.572047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.019 [2024-11-19 00:08:39.572107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.019 [2024-11-19 00:08:39.572124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.019 [2024-11-19 00:08:39.589293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.019 [2024-11-19 00:08:39.589354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.019 [2024-11-19 00:08:39.589371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.019 [2024-11-19 00:08:39.606406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.019 [2024-11-19 00:08:39.606462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.019 [2024-11-19 00:08:39.606481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.019 [2024-11-19 00:08:39.623562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.019 [2024-11-19 00:08:39.623633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.019 [2024-11-19 00:08:39.623652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.019 [2024-11-19 00:08:39.640761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.019 [2024-11-19 00:08:39.640824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.019 [2024-11-19 00:08:39.640842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.019 [2024-11-19 00:08:39.658075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.019 [2024-11-19 00:08:39.658130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.019 [2024-11-19 00:08:39.658150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.019 [2024-11-19 00:08:39.675256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.019 [2024-11-19 00:08:39.675316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.019 [2024-11-19 00:08:39.675334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.019 [2024-11-19 00:08:39.692370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.019 [2024-11-19 00:08:39.692418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.019 [2024-11-19 00:08:39.692436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.278 [2024-11-19 00:08:39.711358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.278 [2024-11-19 00:08:39.711413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.278 [2024-11-19 00:08:39.711433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.279 [2024-11-19 00:08:39.728619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.279 [2024-11-19 00:08:39.728721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.279 [2024-11-19 00:08:39.728739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.279 [2024-11-19 00:08:39.745768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x61500002b280) 00:24:33.279 [2024-11-19 00:08:39.745823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.279 [2024-11-19 00:08:39.745842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.279 [2024-11-19 00:08:39.762843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.279 [2024-11-19 00:08:39.762899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.279 [2024-11-19 00:08:39.762919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.279 [2024-11-19 00:08:39.781467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.279 [2024-11-19 00:08:39.781529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.279 [2024-11-19 00:08:39.781546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.279 14169.00 IOPS, 55.35 MiB/s [2024-11-19T00:08:39.971Z] [2024-11-19 00:08:39.798734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.279 [2024-11-19 00:08:39.798790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.279 [2024-11-19 00:08:39.798810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.279 [2024-11-19 00:08:39.815772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.279 [2024-11-19 00:08:39.815826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.279 [2024-11-19 00:08:39.815848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.279 [2024-11-19 00:08:39.833225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.279 [2024-11-19 00:08:39.833285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.279 [2024-11-19 00:08:39.833303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.279 [2024-11-19 00:08:39.850344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.279 [2024-11-19 00:08:39.850399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.279 [2024-11-19 00:08:39.850419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.279 [2024-11-19 00:08:39.867575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.279 [2024-11-19 00:08:39.867625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.279 [2024-11-19 00:08:39.867647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.279 [2024-11-19 00:08:39.885522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.279 [2024-11-19 00:08:39.885569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.279 [2024-11-19 00:08:39.885586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.279 [2024-11-19 00:08:39.903124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.279 [2024-11-19 00:08:39.903180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.279 [2024-11-19 00:08:39.903199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.279 [2024-11-19 00:08:39.927549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.279 [2024-11-19 00:08:39.927620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.279 [2024-11-19 00:08:39.927640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.279 [2024-11-19 00:08:39.944699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.279 [2024-11-19 00:08:39.944754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.279 [2024-11-19 00:08:39.944775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.279 [2024-11-19 00:08:39.961845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.279 [2024-11-19 00:08:39.961908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.279 [2024-11-19 00:08:39.961927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.538 [2024-11-19 00:08:39.980221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.538 [2024-11-19 00:08:39.980305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.538 [2024-11-19 00:08:39.980340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.538 [2024-11-19 00:08:39.997925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.538 [2024-11-19 00:08:39.997980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.538 [2024-11-19 00:08:39.998002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.538 [2024-11-19 00:08:40.017814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.538 [2024-11-19 00:08:40.017866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.538 [2024-11-19 00:08:40.017885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.538 [2024-11-19 00:08:40.038210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.538 [2024-11-19 00:08:40.038275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.538 [2024-11-19 00:08:40.038296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.538 [2024-11-19 00:08:40.055739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.538 [2024-11-19 00:08:40.055801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.538 [2024-11-19 00:08:40.055818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.538 [2024-11-19 00:08:40.073281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.538 [2024-11-19 00:08:40.073337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.538 [2024-11-19 00:08:40.073357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.538 [2024-11-19 00:08:40.090738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.538 [2024-11-19 00:08:40.090794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.538 [2024-11-19 00:08:40.090814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.538 [2024-11-19 00:08:40.107883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.538 [2024-11-19 00:08:40.107956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1796 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.538 [2024-11-19 00:08:40.107973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.538 [2024-11-19 00:08:40.125035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.538 [2024-11-19 00:08:40.125091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.538 [2024-11-19 00:08:40.125110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.538 [2024-11-19 00:08:40.142061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.538 [2024-11-19 00:08:40.142116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.538 [2024-11-19 00:08:40.142135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.538 [2024-11-19 00:08:40.159271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.538 [2024-11-19 00:08:40.159333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.538 [2024-11-19 00:08:40.159351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.538 [2024-11-19 00:08:40.176515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.538 [2024-11-19 00:08:40.176574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.538 [2024-11-19 00:08:40.176597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.538 [2024-11-19 00:08:40.195515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.538 [2024-11-19 00:08:40.195573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.538 [2024-11-19 00:08:40.195626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.538 [2024-11-19 00:08:40.215130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.538 [2024-11-19 00:08:40.215192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.538 [2024-11-19 00:08:40.215210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.797 [2024-11-19 00:08:40.235032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.797 [2024-11-19 00:08:40.235089] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.797 [2024-11-19 00:08:40.235109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.797 [2024-11-19 00:08:40.253203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.797 [2024-11-19 00:08:40.253265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.797 [2024-11-19 00:08:40.253282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.797 [2024-11-19 00:08:40.271377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.797 [2024-11-19 00:08:40.271433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.797 [2024-11-19 00:08:40.271455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.797 [2024-11-19 00:08:40.289601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.797 [2024-11-19 00:08:40.289672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.797 [2024-11-19 00:08:40.289690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.797 [2024-11-19 00:08:40.307885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.797 [2024-11-19 00:08:40.307951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.797 [2024-11-19 00:08:40.307969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.797 [2024-11-19 00:08:40.325852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.797 [2024-11-19 00:08:40.325910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.797 [2024-11-19 00:08:40.325930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.797 [2024-11-19 00:08:40.344016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.797 [2024-11-19 00:08:40.344077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.797 [2024-11-19 00:08:40.344095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.797 [2024-11-19 00:08:40.362194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x61500002b280) 00:24:33.797 [2024-11-19 00:08:40.362250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.797 [2024-11-19 00:08:40.362274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.797 [2024-11-19 00:08:40.380346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.797 [2024-11-19 00:08:40.380389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.797 [2024-11-19 00:08:40.380410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.798 [2024-11-19 00:08:40.398176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.798 [2024-11-19 00:08:40.398239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.798 [2024-11-19 00:08:40.398257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.798 [2024-11-19 00:08:40.416095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.798 [2024-11-19 00:08:40.416153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.798 [2024-11-19 00:08:40.416173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.798 [2024-11-19 00:08:40.434227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.798 [2024-11-19 00:08:40.434283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.798 [2024-11-19 00:08:40.434300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.798 [2024-11-19 00:08:40.451977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.798 [2024-11-19 00:08:40.452032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.798 [2024-11-19 00:08:40.452048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:33.798 [2024-11-19 00:08:40.469143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:33.798 [2024-11-19 00:08:40.469199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.798 [2024-11-19 00:08:40.469216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.057 [2024-11-19 
00:08:40.487303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:34.057 [2024-11-19 00:08:40.487360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.057 [2024-11-19 00:08:40.487376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.057 [2024-11-19 00:08:40.505247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:34.057 [2024-11-19 00:08:40.505311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.057 [2024-11-19 00:08:40.505330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.057 [2024-11-19 00:08:40.522494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:34.057 [2024-11-19 00:08:40.522550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.057 [2024-11-19 00:08:40.522566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.057 [2024-11-19 00:08:40.539734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:34.057 [2024-11-19 00:08:40.539789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.058 [2024-11-19 00:08:40.539805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.058 [2024-11-19 00:08:40.559594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:34.058 [2024-11-19 00:08:40.559698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.058 [2024-11-19 00:08:40.559720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.058 [2024-11-19 00:08:40.578863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:34.058 [2024-11-19 00:08:40.578920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.058 [2024-11-19 00:08:40.578936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.058 [2024-11-19 00:08:40.596311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:34.058 [2024-11-19 00:08:40.596367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.058 [2024-11-19 00:08:40.596384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.058 [2024-11-19 00:08:40.613513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:34.058 [2024-11-19 00:08:40.613569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.058 [2024-11-19 00:08:40.613586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.058 [2024-11-19 00:08:40.630168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:34.058 [2024-11-19 00:08:40.630223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.058 [2024-11-19 00:08:40.630239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.058 [2024-11-19 00:08:40.646930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:34.058 [2024-11-19 00:08:40.646985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.058 [2024-11-19 00:08:40.647001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.058 [2024-11-19 00:08:40.663965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:34.058 [2024-11-19 00:08:40.664020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.058 [2024-11-19 00:08:40.664036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.058 [2024-11-19 00:08:40.680817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:34.058 [2024-11-19 00:08:40.680874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.058 [2024-11-19 00:08:40.680890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.058 [2024-11-19 00:08:40.697703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:34.058 [2024-11-19 00:08:40.697758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.058 [2024-11-19 00:08:40.697774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.058 [2024-11-19 00:08:40.714812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:34.058 [2024-11-19 00:08:40.714867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.058 [2024-11-19 00:08:40.714882] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:34.058 [2024-11-19 00:08:40.731562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:24:34.058 [2024-11-19 00:08:40.731626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:34.058 [2024-11-19 00:08:40.731644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:34.317 [2024-11-19 00:08:40.749568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:24:34.317 [2024-11-19 00:08:40.749636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:34.317 [2024-11-19 00:08:40.749684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:34.317 [2024-11-19 00:08:40.766580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:24:34.317 [2024-11-19 00:08:40.766643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:34.317 [2024-11-19 00:08:40.766659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:34.317 14232.00 IOPS, 55.59 MiB/s [2024-11-19T00:08:41.009Z]
[2024-11-19 00:08:40.783799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:24:34.317 [2024-11-19 00:08:40.783854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:34.317 [2024-11-19 00:08:40.783871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:34.317
00:24:34.317 Latency(us)
00:24:34.317 [2024-11-19T00:08:41.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:34.317 [2024-11-19T00:08:41.009Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:24:34.317 nvme0n1 : 2.01 14251.30 55.67 0.00 0.00 8974.33 8162.21 32887.16
00:24:34.317 [2024-11-19T00:08:41.009Z] ===================================================================================================================
00:24:34.317 [2024-11-19T00:08:41.009Z] Total : 14251.30 55.67 0.00 0.00 8974.33 8162.21 32887.16
00:24:34.317 {
00:24:34.317 "results": [
00:24:34.317 {
00:24:34.318 "job": "nvme0n1",
00:24:34.318 "core_mask": "0x2",
00:24:34.318 "workload": "randread",
00:24:34.318 "status": "finished",
00:24:34.318 "queue_depth": 128,
00:24:34.318 "io_size": 4096,
00:24:34.318 "runtime": 2.006273,
00:24:34.318 "iops": 14251.300795056306,
00:24:34.318 "mibps": 55.669143730688695,
00:24:34.318 "io_failed": 0,
00:24:34.318 "io_timeout": 0,
00:24:34.318 "avg_latency_us": 8974.328290176527,
00:24:34.318 "min_latency_us": 8162.210909090909,
00:24:34.318 "max_latency_us": 32887.156363636364
00:24:34.318 }
00:24:34.318 ],
00:24:34.318 "core_count": 1
00:24:34.318 }
00:24:34.318
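
For reference, the completed run is internally consistent: 14251.30 IOPS at 4096-byte reads works out to 14251.30 * 4096 / 1048576 = 55.67 MiB/s, the throughput reported in both the table and the JSON above. The error-count check that follows can be reproduced by hand against the same bperf socket; a minimal sketch, using only the socket path and bdev name shown in this trace (the jq path is the script's own filter, collapsed to one line):

  # Read back the per-bdev NVMe error counters kept because bdev_nvme_set_options
  # was invoked with --nvme-error-stat (the same setup is traced for the next pass below)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  # The pass asserts the count is greater than zero; this run extracted 112.

00:08:40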
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:34.318 | .driver_specific
00:24:34.318 | .nvme_error
00:24:34.318 | .status_code
00:24:34.318 | .command_transient_transport_error'
00:24:34.318
00:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:08:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:34.577
00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 112 > 0 ))
00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86373
00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 86373 ']'
00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 86373
00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86373
00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
killing process with pid 86373
00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86373'
Received shutdown signal, test time was about 2.000000 seconds
00:24:34.577
00:24:34.577 Latency(us)
[2024-11-19T00:08:41.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:34.577 [2024-11-19T00:08:41.269Z] ===================================================================================================================
00:24:34.577 [2024-11-19T00:08:41.269Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:34.577
00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 86373
00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 86373
00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86440
00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86440 /var/tmp/bperf.sock
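
run_bperf_err now repeats the same flow with 128 KiB random reads at queue depth 16. A minimal annotated sketch of the launch that follows in the trace; the command line is taken verbatim from this log, while the flag glosses in the comments are standard bdevperf usage and are inferred, not printed by the test:

  # -m 2: core mask (core 1); -r: RPC socket controlling this instance; -z: start
  # idle and wait for RPC configuration; -w randread -o 131072 -q 16 -t 2:
  # random reads, 128 KiB I/O size, queue depth 16, 2-second test duration
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!    # 86440 here; waitforlisten then blocks until the socket accepts RPCs

00:08:41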
00:24:35.516 00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:24:35.516 00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 86440 ']'
00:24:35.516 00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:35.516 00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:35.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:35.516 00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:35.516 00:08:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:35.516 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:35.516 Zero copy mechanism will not be used.
00:24:35.516 [2024-11-19 00:08:41.933125] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:24:35.516 [2024-11-19 00:08:41.933259] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86440 ]
00:24:35.516 [2024-11-19 00:08:42.094907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:35.516 [2024-11-19 00:08:42.175215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:35.775 [2024-11-19 00:08:42.319642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:24:36.344 00:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:36.344 00:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:24:36.344 00:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:36.344 00:08:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:36.603 00:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:36.603 00:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.603 00:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:36.603 00:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.603 00:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
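The records above, together with the rpc.py call that follows them, are the per-pass recipe this test repeats: bdevperf starts idle (-z) on its own RPC socket, error accounting and endless retries are switched on, the controller is attached with TCP data digest enabled (--ddgst), and then, in the records below, the accel crc32c path is told to corrupt 32 operations before perform_tests starts the timed run. How the two sockets split the work is inferred from the xtrace: plain rpc_cmd goes to the nvmf target's default RPC socket, while bperf_rpc goes to /var/tmp/bperf.sock. A condensed sketch of the sequence, with the same paths and arguments as this run:

# Start bdevperf idle on its own RPC socket (-z: do nothing until perform_tests);
# the harness then waits for the socket via waitforlisten before issuing RPCs
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# bperf side: keep per-status NVMe error counters, retry failed I/O indefinitely
$rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# target side (default RPC socket): clear any stale crc32c injection
$rpc accel_error_inject_error -o crc32c -t disable
# bperf side: attach the controller with TCP data digest enabled
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# target side: corrupt 32 crc32c operations so the digests it sends are bad
$rpc accel_error_inject_error -o crc32c -t corrupt -i 32
# start the timed run
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With --ddgst set, every READ whose payload crc32c was corrupted on the target side fails the host's data digest check and completes as COMMAND TRANSIENT TRANSPORT ERROR, which is exactly the flood of records that follows.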
00:24:36.603 00:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:36.863 nvme0n1
00:24:36.863 00:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:36.863 00:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:36.863 00:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:36.863 00:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:36.863 00:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:36.863 00:08:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests I/O size of 131072 is greater than zero copy threshold (65536).
00:24:36.863 Zero copy mechanism will not be used.
00:24:36.863 Running I/O for 2 seconds...
00:24:36.863 [2024-11-19 00:08:43.503883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:24:36.863 [2024-11-19 00:08:43.503957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.863 [2024-11-19 00:08:43.503977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:36.863 [2024-11-19 00:08:43.508771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:24:36.863 [2024-11-19 00:08:43.508815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.863 [2024-11-19 00:08:43.508838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:36.863 [2024-11-19 00:08:43.513597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:24:36.863 [2024-11-19 00:08:43.513668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.863 [2024-11-19 00:08:43.513689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:36.863 [2024-11-19 00:08:43.518078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:24:36.863 [2024-11-19 00:08:43.518142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.863 [2024-11-19 00:08:43.518160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:36.863 [2024-11-19 00:08:43.522711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:24:36.863 [2024-11-19 00:08:43.522766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:24:36.863 [2024-11-19 00:08:43.522786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.863 [2024-11-19 00:08:43.527144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.863 [2024-11-19 00:08:43.527200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.863 [2024-11-19 00:08:43.527220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.863 [2024-11-19 00:08:43.531655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.863 [2024-11-19 00:08:43.531714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.863 [2024-11-19 00:08:43.531732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:36.863 [2024-11-19 00:08:43.536079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.863 [2024-11-19 00:08:43.536140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.863 [2024-11-19 00:08:43.536157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:36.863 [2024-11-19 00:08:43.540666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.863 [2024-11-19 00:08:43.540736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.863 [2024-11-19 00:08:43.540756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:36.863 [2024-11-19 00:08:43.545156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.863 [2024-11-19 00:08:43.545211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.863 [2024-11-19 00:08:43.545231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:36.864 [2024-11-19 00:08:43.550266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:36.864 [2024-11-19 00:08:43.550328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.864 [2024-11-19 00:08:43.550346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.555357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.555439] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.555458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.560016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.560071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.560091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.564546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.564604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.564650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.569067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.569129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.569147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.573548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.573608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.573640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.578115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.578170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.578189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.582694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.582754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.582771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.587761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.587831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.587848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.592304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.592380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.592406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.597094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.597149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.597171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.601674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.601734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.601752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.606472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.606535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.606552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.611488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.611543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.611563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.616474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.616521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.616559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.621585] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.621696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.621716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.627303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.627369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.627389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.632508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.632555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.632578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.637694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.637769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.637802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.642754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.642818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.642837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.647617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.125 [2024-11-19 00:08:43.647708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.125 [2024-11-19 00:08:43.647737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.125 [2024-11-19 00:08:43.652124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.652178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.652198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.656786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.656839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.656859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.661345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.661408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.661425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.665995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.666073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.666091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.670607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.670676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.670695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.675141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.675196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.675215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.679661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.679720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.679737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.684259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.684345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.684364] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.689007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.689061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.689081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.693530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.693585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.693620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.697969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.698046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.698063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.702536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.702590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.702622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.707002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.707056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.707077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.711558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.711629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.711648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.716063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.716123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21696 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.716141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.720768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.720822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.720841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.725313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.725367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.725389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.730023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.730087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.730105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.734635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.734696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.734713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.739098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.739153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.739172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.743673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.743727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.743746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.748107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.748169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.748186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.752730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.752784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.752803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.757172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.757227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.757249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.761715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.761778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.761796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.766205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.766266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.126 [2024-11-19 00:08:43.766283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.126 [2024-11-19 00:08:43.770778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.126 [2024-11-19 00:08:43.770832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.127 [2024-11-19 00:08:43.770855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.127 [2024-11-19 00:08:43.775310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.127 [2024-11-19 00:08:43.775365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.127 [2024-11-19 00:08:43.775384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.127 [2024-11-19 00:08:43.779775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:24:37.127 [2024-11-19 00:08:43.779834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.127 [2024-11-19 00:08:43.779852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.127 [2024-11-19 00:08:43.784387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.127 [2024-11-19 00:08:43.784449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.127 [2024-11-19 00:08:43.784467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.127 [2024-11-19 00:08:43.789039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.127 [2024-11-19 00:08:43.789094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.127 [2024-11-19 00:08:43.789113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.127 [2024-11-19 00:08:43.793520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.127 [2024-11-19 00:08:43.793585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.127 [2024-11-19 00:08:43.793603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.127 [2024-11-19 00:08:43.798049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.127 [2024-11-19 00:08:43.798109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.127 [2024-11-19 00:08:43.798126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.127 [2024-11-19 00:08:43.802624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.127 [2024-11-19 00:08:43.802676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.127 [2024-11-19 00:08:43.802696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.127 [2024-11-19 00:08:43.807234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.127 [2024-11-19 00:08:43.807291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.127 [2024-11-19 00:08:43.807311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.388 [2024-11-19 00:08:43.812444] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.388 [2024-11-19 00:08:43.812509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.388 [2024-11-19 00:08:43.812529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.388 [2024-11-19 00:08:43.817437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.388 [2024-11-19 00:08:43.817497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.388 [2024-11-19 00:08:43.817515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.388 [2024-11-19 00:08:43.822299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.388 [2024-11-19 00:08:43.822353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.388 [2024-11-19 00:08:43.822373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.388 [2024-11-19 00:08:43.826831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.388 [2024-11-19 00:08:43.826885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.388 [2024-11-19 00:08:43.826905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.388 [2024-11-19 00:08:43.831321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.388 [2024-11-19 00:08:43.831382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.388 [2024-11-19 00:08:43.831399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.388 [2024-11-19 00:08:43.836063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.388 [2024-11-19 00:08:43.836127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.388 [2024-11-19 00:08:43.836145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.388 [2024-11-19 00:08:43.840801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.388 [2024-11-19 00:08:43.840858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.388 [2024-11-19 00:08:43.840878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.388 [2024-11-19 00:08:43.845530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.388 [2024-11-19 00:08:43.845595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.388 [2024-11-19 00:08:43.845642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.388 [2024-11-19 00:08:43.850072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.388 [2024-11-19 00:08:43.850131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.388 [2024-11-19 00:08:43.850148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.388 [2024-11-19 00:08:43.854570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.388 [2024-11-19 00:08:43.854642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.388 [2024-11-19 00:08:43.854660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.388 [2024-11-19 00:08:43.859145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.388 [2024-11-19 00:08:43.859200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.388 [2024-11-19 00:08:43.859219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.388 [2024-11-19 00:08:43.863607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.388 [2024-11-19 00:08:43.863666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.388 [2024-11-19 00:08:43.863683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.388 [2024-11-19 00:08:43.868144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.388 [2024-11-19 00:08:43.868208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.388 [2024-11-19 00:08:43.868227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.388 [2024-11-19 00:08:43.872839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.388 [2024-11-19 00:08:43.872893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.388 [2024-11-19 00:08:43.872913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.388 [2024-11-19 00:08:43.877439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.388 [2024-11-19 00:08:43.877493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.388 [2024-11-19 00:08:43.877530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.388 [2024-11-19 00:08:43.882063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.388 [2024-11-19 00:08:43.882124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.388 [2024-11-19 00:08:43.882141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.388 [2024-11-19 00:08:43.886595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.388 [2024-11-19 00:08:43.886669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.388 [2024-11-19 00:08:43.886686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.388 [2024-11-19 00:08:43.891154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.389 [2024-11-19 00:08:43.891208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.389 [2024-11-19 00:08:43.891227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.389 [2024-11-19 00:08:43.895670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.389 [2024-11-19 00:08:43.895724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.389 [2024-11-19 00:08:43.895743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.389 [2024-11-19 00:08:43.900222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.389 [2024-11-19 00:08:43.900308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.389 [2024-11-19 00:08:43.900344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.389 [2024-11-19 00:08:43.905013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.389 [2024-11-19 00:08:43.905074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:37.389 [2024-11-19 00:08:43.905092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.389 [2024-11-19 00:08:43.909510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.389 [2024-11-19 00:08:43.909564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.389 [2024-11-19 00:08:43.909583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.389 [2024-11-19 00:08:43.914066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.389 [2024-11-19 00:08:43.914127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.389 [2024-11-19 00:08:43.914144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.389 [2024-11-19 00:08:43.918931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.389 [2024-11-19 00:08:43.919011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.389 [2024-11-19 00:08:43.919043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.389 [2024-11-19 00:08:43.923883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.389 [2024-11-19 00:08:43.923940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.389 [2024-11-19 00:08:43.923961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:37.389 [2024-11-19 00:08:43.928838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.389 [2024-11-19 00:08:43.928894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.389 [2024-11-19 00:08:43.928915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.389 [2024-11-19 00:08:43.934106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.389 [2024-11-19 00:08:43.934171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.389 [2024-11-19 00:08:43.934190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.389 [2024-11-19 00:08:43.939226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:37.389 [2024-11-19 00:08:43.939288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.389 [2024-11-19 00:08:43.939306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:37.389 .. 00:24:38.177 [2024-11-19 00:08:43.944418 .. 00:08:44.606714] the same three-message sequence repeats for every outstanding READ on the qpair: nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280); nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 nsid:1 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (cid cycling 0-15, varying lba); nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cdw0:0 p:0 m:0 dnr:0 (sqhd cycling 0002/0022/0042/0062)
00:24:37.915 6587.00 IOPS, 823.38 MiB/s [2024-11-19T00:08:44.607Z]
00:24:38.177 [2024-11-19 00:08:44.611525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:24:38.177 [2024-11-19 00:08:44.611769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:38.177 [2024-11-19 00:08:44.611793] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.616432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.616498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.616518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.621057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.621115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.621133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.625548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.625608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.625658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.630205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.630266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.630284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.635083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.635278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.635301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.640370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.640437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.640458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.645576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.645675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21120 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.645698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.651008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.651055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.651073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.655935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.656010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.656058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.661027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.661086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.661103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.665981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.666210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.666233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.671085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.671145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.671162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.675800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.675861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.675879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.680463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.680527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.680546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.685142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.685337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.685361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.689974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.690034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.690051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.694573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.694661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.694679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.699158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.699217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.699235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.703770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.703829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.703846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.708398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.708461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.708479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.713263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:24:38.177 [2024-11-19 00:08:44.713321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.177 [2024-11-19 00:08:44.713339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.177 [2024-11-19 00:08:44.717946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.717991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.718009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.722589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.722657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.722675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.727029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.727088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.727105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.731483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.731541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.731558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.736043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.736101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.736118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.740757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.740814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.740831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.745318] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.745378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.745395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.749924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.749983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.750000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.754398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.754457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.754474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.759018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.759075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.759092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.763591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.763677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.763695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.768030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.768089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.768107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.772537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.772594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.772655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.777021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.777076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.777092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.781506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.781563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.781580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.786072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.786126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.786142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.790565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.790630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.790647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.795012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.795067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.795083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.799569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.799651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.799668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.804067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.804121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.804137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.808685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.808737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.808753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.813191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.813246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.813262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.817709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.817763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.817779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.822201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.822255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.822271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.826780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.826834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.826850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.831277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.831332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.178 [2024-11-19 00:08:44.831348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.178 [2024-11-19 00:08:44.835852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.178 [2024-11-19 00:08:44.835908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:38.179 [2024-11-19 00:08:44.835924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.179 [2024-11-19 00:08:44.840414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.179 [2024-11-19 00:08:44.840470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.179 [2024-11-19 00:08:44.840486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.179 [2024-11-19 00:08:44.845006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.179 [2024-11-19 00:08:44.845060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.179 [2024-11-19 00:08:44.845092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.179 [2024-11-19 00:08:44.849483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.179 [2024-11-19 00:08:44.849537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.179 [2024-11-19 00:08:44.849553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.179 [2024-11-19 00:08:44.854026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.179 [2024-11-19 00:08:44.854081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.179 [2024-11-19 00:08:44.854097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.179 [2024-11-19 00:08:44.858590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.179 [2024-11-19 00:08:44.858654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.179 [2024-11-19 00:08:44.858671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.443 [2024-11-19 00:08:44.863602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.443 [2024-11-19 00:08:44.863686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.863703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.868440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.868483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.868500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.873444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.873500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.873516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.878072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.878127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.878143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.882745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.882801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.882818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.887363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.887419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.887435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.892053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.892108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.892124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.896544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.896640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.896673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.901121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.901175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.901191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.905587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.905652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.905668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.910159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.910214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.910230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.914692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.914745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.914762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.919132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.919187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.919202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.923577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.923658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.923675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.928070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.928125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.928141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.932590] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.932657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.932673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.937042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.937096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.937112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.941535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.941590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.941605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.946080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.946134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.946150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.950630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.950684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.950700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.955045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.955100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.955116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.959575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.959655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.959673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.964115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.964169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.964185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.968691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.968745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.968760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.973105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.973159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.973175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.977726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.977781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.977798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.444 [2024-11-19 00:08:44.982299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.444 [2024-11-19 00:08:44.982356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.444 [2024-11-19 00:08:44.982372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.445 [2024-11-19 00:08:44.986824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.445 [2024-11-19 00:08:44.986878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.445 [2024-11-19 00:08:44.986893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.445 [2024-11-19 00:08:44.991273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.445 [2024-11-19 00:08:44.991328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.445 [2024-11-19 00:08:44.991344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.445 [2024-11-19 00:08:44.995966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.445 [2024-11-19 00:08:44.996037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.445 [2024-11-19 00:08:44.996053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.445 [2024-11-19 00:08:45.000686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.445 [2024-11-19 00:08:45.000738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.445 [2024-11-19 00:08:45.000754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.445 [2024-11-19 00:08:45.005334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.445 [2024-11-19 00:08:45.005388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.445 [2024-11-19 00:08:45.005403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.445 [2024-11-19 00:08:45.009902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.445 [2024-11-19 00:08:45.009956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.445 [2024-11-19 00:08:45.009973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.445 [2024-11-19 00:08:45.014540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.445 [2024-11-19 00:08:45.014594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.445 [2024-11-19 00:08:45.014620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.445 [2024-11-19 00:08:45.019011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.445 [2024-11-19 00:08:45.019067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.445 [2024-11-19 00:08:45.019083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.445 [2024-11-19 00:08:45.023515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:24:38.445 [2024-11-19 00:08:45.023571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17312 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0
00:24:38.445 [2024-11-19 00:08:45.023586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:38.445 [2024-11-19 00:08:45.028046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:24:38.445 [2024-11-19 00:08:45.028100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:38.445 [2024-11-19 00:08:45.028116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:38.445 [2024-11-19 00:08:45.032760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:24:38.445 [2024-11-19 00:08:45.032800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:38.445 [2024-11-19 00:08:45.032816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... roughly 100 more groups of the same three-line pattern (nvme_tcp.c:1365 data digest error -> nvme_qpair.c READ command -> COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion), cycling through cids 5-10 with varying LBAs, every ~5 ms from 00:08:45.037 through 00:08:45.497, omitted ...]
00:24:39.003 6582.50 IOPS, 822.81 MiB/s [2024-11-19T00:08:45.695Z]
00:24:39.003 [2024-11-19 00:08:45.503269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280)
00:24:39.003 [2024-11-19 00:08:45.503304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:39.003 [2024-11-19 00:08:45.503320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:39.003
00:24:39.003 Latency(us)
00:24:39.003 [2024-11-19T00:08:45.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:39.003 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:39.003 nvme0n1 : 2.00 6582.05 822.76 0.00 0.00 2426.88 975.59 6791.91
00:24:39.003 [2024-11-19T00:08:45.695Z] ===================================================================================================================
00:24:39.003 [2024-11-19T00:08:45.695Z] Total : 6582.05 822.76 0.00 0.00 2426.88 975.59 6791.91
00:24:39.003 {
00:24:39.003   "results": [
00:24:39.003     {
00:24:39.003       "job": "nvme0n1",
00:24:39.003       "core_mask": "0x2",
00:24:39.003       "workload": "randread",
00:24:39.003       "status": "finished",
00:24:39.003       "queue_depth": 16,
00:24:39.003       "io_size": 131072,
00:24:39.003       "runtime": 2.002569,
00:24:39.003       "iops": 6582.045362731571,
00:24:39.003       "mibps": 822.7556703414464,
00:24:39.003       "io_failed": 0,
00:24:39.003       "io_timeout": 0,
00:24:39.003       "avg_latency_us": 2426.876680621556,
00:24:39.003       "min_latency_us": 975.5927272727273,
00:24:39.003       "max_latency_us": 6791.912727272727
00:24:39.003     }
00:24:39.003   ],
00:24:39.003   "core_count": 1
00:24:39.003 }
00:24:39.003 00:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:24:39.262 00:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 426 > 0 ))
00:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86440
00:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 86440 ']'
00:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 86440
00:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86440
00:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
killing process with pid 86440
00:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86440'
00:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 86440
00:24:39.262 Received shutdown signal, test time was about 2.000000 seconds
00:24:39.262
00:24:39.262 Latency(us)
00:24:39.262 [2024-11-19T00:08:45.954Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:39.262 [2024-11-19T00:08:45.954Z] ===================================================================================================================
00:24:39.262 [2024-11-19T00:08:45.954Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:08:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 86440
00:24:40.199 00:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
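For reference, the get_transient_errcount call traced above is a single RPC round-trip: bdev_get_iostat (with per-command error counters enabled earlier via --nvme-error-stat) reports how many completions carried each NVMe status code, and the jq filter pulls out the COMMAND TRANSIENT TRANSPORT ERROR count, here 426, which the (( 426 > 0 )) assertion then checks. The summary numbers above are also self-consistent: 6582.05 IOPS at 128 KiB per I/O is 6582.05/8 ≈ 822.76 MiB/s, matching the reported mibps. A minimal stand-alone sketch of the same check, assuming the socket path and bdev name from this run (the variable name and exit handling are illustrative, not the test script's):

  #!/usr/bin/env bash
  # Count completions with NVMe status COMMAND TRANSIENT TRANSPORT ERROR (00/22)
  # recorded by the bdevperf instance listening on /var/tmp/bperf.sock.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
         jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # Every injected digest error should have surfaced as one such completion.
  (( errs > 0 )) || { echo "no transient transport errors recorded" >&2; exit 1; }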
00:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86502
00:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86502 /var/tmp/bperf.sock
00:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 86502 ']'
00:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-11-19 00:08:46.704547] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
[2024-11-19 00:08:46.704743] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86502 ]
[2024-11-19 00:08:46.883234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-19 00:08:46.964426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-19 00:08:47.107270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:24:41.028 00:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:41.287 00:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
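The trace above is the setup half of run_bperf_err for the randwrite/4096/128 pass: launch a fresh bdevperf on a private RPC socket, wait for it to listen, enable per-command NVMe error counters with unlimited bdev retries (so the injected errors are counted rather than fatal), clear any stale crc32c error injection, and attach the target with data digest enabled (--ddgst). Note the two RPC endpoints: bperf_rpc explicitly passes -s /var/tmp/bperf.sock to reach bdevperf, while rpc_cmd presumably goes to the nvmf target's default RPC socket, which is where the digest corruption gets injected. A condensed, hedged sketch of the sequence (the until-loop is a stand-in for the harness's waitforlisten helper):

  #!/usr/bin/env bash
  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bperf.sock
  # -z makes bdevperf wait for a perform_tests RPC instead of starting I/O itself.
  "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &
  until [[ -S "$sock" ]]; do sleep 0.1; done
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  "$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable    # target side, default socket
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

The lines that follow re-arm the injection (accel_error_inject_error -o crc32c -t corrupt -i 256) and start the 2-second run with bdevperf.py perform_tests; each corrupted crc32c then shows up below as a tcp.c data digest error paired with a TRANSIENT TRANSPORT ERROR completion.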
00:08:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:41.547 nvme0n1
00:24:41.547 00:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:08:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:41.807 Running I/O for 2 seconds...
00:24:41.807 [2024-11-19 00:08:48.274312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfda78
00:24:41.807 [2024-11-19 00:08:48.275892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:41.807 [2024-11-19 00:08:48.275946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:41.807 [2024-11-19 00:08:48.290980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe2e8
00:24:41.807 [2024-11-19 00:08:48.292520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:41.807 [2024-11-19 00:08:48.292588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... some two dozen more groups of the same pattern (tcp.c:2233 Data digest error -> WRITE command -> COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) for ascending cids and varying LBAs/pdu offsets, every ~15 ms from 00:08:48.307 through 00:08:48.679, omitted ...]
00:24:42.068 [2024-11-19 00:08:48.693102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3a28
00:24:42.068 [2024-11-19 00:08:48.695375] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.068 [2024-11-19 00:08:48.695417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:42.068 [2024-11-19 00:08:48.709938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf31b8 00:24:42.068 [2024-11-19 00:08:48.712501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.068 [2024-11-19 00:08:48.712546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:42.068 [2024-11-19 00:08:48.728132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2948 00:24:42.068 [2024-11-19 00:08:48.730884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.068 [2024-11-19 00:08:48.730938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:42.068 [2024-11-19 00:08:48.747166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf20d8 00:24:42.068 [2024-11-19 00:08:48.749552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.068 [2024-11-19 00:08:48.749824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:42.328 [2024-11-19 00:08:48.765527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1868 00:24:42.328 [2024-11-19 00:08:48.767831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.328 [2024-11-19 00:08:48.767895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:42.328 [2024-11-19 00:08:48.782540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0ff8 00:24:42.328 [2024-11-19 00:08:48.784826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.328 [2024-11-19 00:08:48.784889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:42.328 [2024-11-19 00:08:48.798871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0788 00:24:42.328 [2024-11-19 00:08:48.801316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.328 [2024-11-19 00:08:48.801357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:42.328 [2024-11-19 00:08:48.815403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beff18 
00:24:42.328 [2024-11-19 00:08:48.817714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.328 [2024-11-19 00:08:48.817755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:42.328 [2024-11-19 00:08:48.831857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef6a8 00:24:42.328 [2024-11-19 00:08:48.834003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.328 [2024-11-19 00:08:48.834067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:42.328 [2024-11-19 00:08:48.848141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beee38 00:24:42.328 [2024-11-19 00:08:48.850326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.328 [2024-11-19 00:08:48.850375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:42.328 [2024-11-19 00:08:48.864757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bee5c8 00:24:42.328 [2024-11-19 00:08:48.866806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.328 [2024-11-19 00:08:48.866869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:42.328 [2024-11-19 00:08:48.881739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bedd58 00:24:42.328 [2024-11-19 00:08:48.883885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.328 [2024-11-19 00:08:48.883943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:42.328 [2024-11-19 00:08:48.898643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed4e8 00:24:42.328 [2024-11-19 00:08:48.900775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.328 [2024-11-19 00:08:48.900831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:42.328 [2024-11-19 00:08:48.914994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016becc78 00:24:42.328 [2024-11-19 00:08:48.917111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.328 [2024-11-19 00:08:48.917174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:42.328 [2024-11-19 00:08:48.931768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:24:42.328 [2024-11-19 00:08:48.934220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.328 [2024-11-19 00:08:48.934286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:42.328 [2024-11-19 00:08:48.950456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:24:42.329 [2024-11-19 00:08:48.952804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.329 [2024-11-19 00:08:48.952870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:42.329 [2024-11-19 00:08:48.968190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb328 00:24:42.329 [2024-11-19 00:08:48.970312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.329 [2024-11-19 00:08:48.970374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:42.329 [2024-11-19 00:08:48.985762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beaab8 00:24:42.329 [2024-11-19 00:08:48.987913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.329 [2024-11-19 00:08:48.987973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:42.329 [2024-11-19 00:08:49.003318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea248 00:24:42.329 [2024-11-19 00:08:49.005721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.329 [2024-11-19 00:08:49.005780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:42.589 [2024-11-19 00:08:49.022660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be99d8 00:24:42.589 [2024-11-19 00:08:49.024762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.589 [2024-11-19 00:08:49.024820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:42.589 [2024-11-19 00:08:49.040181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9168 00:24:42.589 [2024-11-19 00:08:49.042290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.589 [2024-11-19 00:08:49.042348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:42.589 [2024-11-19 
00:08:49.057592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be88f8 00:24:42.589 [2024-11-19 00:08:49.059480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.589 [2024-11-19 00:08:49.059546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:42.589 [2024-11-19 00:08:49.074759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8088 00:24:42.589 [2024-11-19 00:08:49.076727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.589 [2024-11-19 00:08:49.076785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:42.589 [2024-11-19 00:08:49.091973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7818 00:24:42.589 [2024-11-19 00:08:49.093901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.589 [2024-11-19 00:08:49.093944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.589 [2024-11-19 00:08:49.109367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6fa8 00:24:42.589 [2024-11-19 00:08:49.111286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.589 [2024-11-19 00:08:49.111339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:42.589 [2024-11-19 00:08:49.126740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6738 00:24:42.589 [2024-11-19 00:08:49.128601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.589 [2024-11-19 00:08:49.128708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:42.589 [2024-11-19 00:08:49.143782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5ec8 00:24:42.589 [2024-11-19 00:08:49.145756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.589 [2024-11-19 00:08:49.145809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:42.589 [2024-11-19 00:08:49.161045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5658 00:24:42.589 [2024-11-19 00:08:49.162861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.589 [2024-11-19 00:08:49.162913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:42.589 [2024-11-19 00:08:49.178632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4de8 00:24:42.589 [2024-11-19 00:08:49.180408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.589 [2024-11-19 00:08:49.180469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:42.589 [2024-11-19 00:08:49.195652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4578 00:24:42.589 [2024-11-19 00:08:49.197407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.589 [2024-11-19 00:08:49.197464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:42.589 [2024-11-19 00:08:49.211959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3d08 00:24:42.589 [2024-11-19 00:08:49.213718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.589 [2024-11-19 00:08:49.213771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:42.589 [2024-11-19 00:08:49.228465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3498 00:24:42.589 [2024-11-19 00:08:49.230206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.589 [2024-11-19 00:08:49.230257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:42.589 [2024-11-19 00:08:49.244923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be2c28 00:24:42.589 [2024-11-19 00:08:49.246509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.589 [2024-11-19 00:08:49.246570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:42.589 14802.00 IOPS, 57.82 MiB/s [2024-11-19T00:08:49.281Z] [2024-11-19 00:08:49.262153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be23b8 00:24:42.589 [2024-11-19 00:08:49.263788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.589 [2024-11-19 00:08:49.263847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:42.848 [2024-11-19 00:08:49.279439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1b48 00:24:42.848 [2024-11-19 00:08:49.281443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.848 [2024-11-19 
00:08:49.281501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:42.848 [2024-11-19 00:08:49.296742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be12d8 00:24:42.848 [2024-11-19 00:08:49.298295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.848 [2024-11-19 00:08:49.298347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:42.848 [2024-11-19 00:08:49.313437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0a68 00:24:42.848 [2024-11-19 00:08:49.315090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.848 [2024-11-19 00:08:49.315144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:42.848 [2024-11-19 00:08:49.329711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:24:42.848 [2024-11-19 00:08:49.331209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.848 [2024-11-19 00:08:49.331261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:42.848 [2024-11-19 00:08:49.345793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf988 00:24:42.849 [2024-11-19 00:08:49.347317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.849 [2024-11-19 00:08:49.347377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:42.849 [2024-11-19 00:08:49.362429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf118 00:24:42.849 [2024-11-19 00:08:49.364006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.849 [2024-11-19 00:08:49.364080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.849 [2024-11-19 00:08:49.378801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde8a8 00:24:42.849 [2024-11-19 00:08:49.380241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.849 [2024-11-19 00:08:49.380286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:42.849 [2024-11-19 00:08:49.395004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde038 00:24:42.849 [2024-11-19 00:08:49.396458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17324 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.849 [2024-11-19 00:08:49.396513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:42.849 [2024-11-19 00:08:49.417785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde038 00:24:42.849 [2024-11-19 00:08:49.420391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.849 [2024-11-19 00:08:49.420433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:42.849 [2024-11-19 00:08:49.434016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde8a8 00:24:42.849 [2024-11-19 00:08:49.436772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.849 [2024-11-19 00:08:49.436825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:42.849 [2024-11-19 00:08:49.450344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf118 00:24:42.849 [2024-11-19 00:08:49.453092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.849 [2024-11-19 00:08:49.453151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:42.849 [2024-11-19 00:08:49.466748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf988 00:24:42.849 [2024-11-19 00:08:49.469360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.849 [2024-11-19 00:08:49.469418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:42.849 [2024-11-19 00:08:49.483088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be01f8 00:24:42.849 [2024-11-19 00:08:49.485730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.849 [2024-11-19 00:08:49.485782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:42.849 [2024-11-19 00:08:49.499338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0a68 00:24:42.849 [2024-11-19 00:08:49.501936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.849 [2024-11-19 00:08:49.501988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:42.849 [2024-11-19 00:08:49.515705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be12d8 00:24:42.849 [2024-11-19 00:08:49.518230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.849 [2024-11-19 00:08:49.518289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:42.849 [2024-11-19 00:08:49.531897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1b48 00:24:42.849 [2024-11-19 00:08:49.534828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.849 [2024-11-19 00:08:49.534891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:43.108 [2024-11-19 00:08:49.549507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be23b8 00:24:43.108 [2024-11-19 00:08:49.551974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.108 [2024-11-19 00:08:49.552031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:43.108 [2024-11-19 00:08:49.566047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be2c28 00:24:43.108 [2024-11-19 00:08:49.568574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.108 [2024-11-19 00:08:49.568635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.108 [2024-11-19 00:08:49.582261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3498 00:24:43.108 [2024-11-19 00:08:49.584868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.108 [2024-11-19 00:08:49.584921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:43.108 [2024-11-19 00:08:49.598637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3d08 00:24:43.108 [2024-11-19 00:08:49.601115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.108 [2024-11-19 00:08:49.601174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:43.108 [2024-11-19 00:08:49.614911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4578 00:24:43.108 [2024-11-19 00:08:49.617394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.108 [2024-11-19 00:08:49.617451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:43.108 [2024-11-19 00:08:49.631174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200016be4de8 00:24:43.108 [2024-11-19 00:08:49.633700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.108 [2024-11-19 00:08:49.633754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:43.108 [2024-11-19 00:08:49.647408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5658 00:24:43.108 [2024-11-19 00:08:49.649945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.108 [2024-11-19 00:08:49.649997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:43.108 [2024-11-19 00:08:49.663762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5ec8 00:24:43.108 [2024-11-19 00:08:49.666192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.108 [2024-11-19 00:08:49.666251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:43.108 [2024-11-19 00:08:49.680008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6738 00:24:43.108 [2024-11-19 00:08:49.682538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.108 [2024-11-19 00:08:49.682598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:43.108 [2024-11-19 00:08:49.696408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6fa8 00:24:43.108 [2024-11-19 00:08:49.698824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.108 [2024-11-19 00:08:49.698876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:43.108 [2024-11-19 00:08:49.712765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7818 00:24:43.108 [2024-11-19 00:08:49.715093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.108 [2024-11-19 00:08:49.715145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:43.108 [2024-11-19 00:08:49.729059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8088 00:24:43.108 [2024-11-19 00:08:49.731430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.108 [2024-11-19 00:08:49.731492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:43.108 [2024-11-19 00:08:49.746614] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be88f8 00:24:43.109 [2024-11-19 00:08:49.749323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.109 [2024-11-19 00:08:49.749382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:43.109 [2024-11-19 00:08:49.766168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9168 00:24:43.109 [2024-11-19 00:08:49.768688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.109 [2024-11-19 00:08:49.768749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:43.109 [2024-11-19 00:08:49.783901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be99d8 00:24:43.109 [2024-11-19 00:08:49.786260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.109 [2024-11-19 00:08:49.786316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:43.368 [2024-11-19 00:08:49.801672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea248 00:24:43.368 [2024-11-19 00:08:49.803843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.368 [2024-11-19 00:08:49.803895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:43.368 [2024-11-19 00:08:49.818061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beaab8 00:24:43.368 [2024-11-19 00:08:49.820244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.368 [2024-11-19 00:08:49.820320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:43.368 [2024-11-19 00:08:49.834475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb328 00:24:43.368 [2024-11-19 00:08:49.836807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.368 [2024-11-19 00:08:49.836860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:43.368 [2024-11-19 00:08:49.850842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebb98 00:24:43.368 [2024-11-19 00:08:49.853091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.368 [2024-11-19 00:08:49.853152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:43.368 
[2024-11-19 00:08:49.867698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec408 00:24:43.368 [2024-11-19 00:08:49.870085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.368 [2024-11-19 00:08:49.870143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:43.368 [2024-11-19 00:08:49.884816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016becc78 00:24:43.368 [2024-11-19 00:08:49.887026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.368 [2024-11-19 00:08:49.887080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:43.368 [2024-11-19 00:08:49.901590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed4e8 00:24:43.368 [2024-11-19 00:08:49.903650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.368 [2024-11-19 00:08:49.903702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:43.368 [2024-11-19 00:08:49.917693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bedd58 00:24:43.368 [2024-11-19 00:08:49.919727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.368 [2024-11-19 00:08:49.919787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:43.368 [2024-11-19 00:08:49.934054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bee5c8 00:24:43.368 [2024-11-19 00:08:49.936141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.368 [2024-11-19 00:08:49.936198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:43.368 [2024-11-19 00:08:49.950373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beee38 00:24:43.369 [2024-11-19 00:08:49.952549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.369 [2024-11-19 00:08:49.952605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:43.369 [2024-11-19 00:08:49.966664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef6a8 00:24:43.369 [2024-11-19 00:08:49.968816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.369 [2024-11-19 00:08:49.968854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:43.369 [2024-11-19 00:08:49.983001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beff18 00:24:43.369 [2024-11-19 00:08:49.985160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.369 [2024-11-19 00:08:49.985213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:43.369 [2024-11-19 00:08:49.999978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0788 00:24:43.369 [2024-11-19 00:08:50.002245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.369 [2024-11-19 00:08:50.002308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:43.369 [2024-11-19 00:08:50.019259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0ff8 00:24:43.369 [2024-11-19 00:08:50.021554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.369 [2024-11-19 00:08:50.021653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:43.369 [2024-11-19 00:08:50.036953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1868 00:24:43.369 [2024-11-19 00:08:50.038911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.369 [2024-11-19 00:08:50.038970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:43.369 [2024-11-19 00:08:50.053818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf20d8 00:24:43.369 [2024-11-19 00:08:50.055901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.369 [2024-11-19 00:08:50.055963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:43.629 [2024-11-19 00:08:50.071149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2948 00:24:43.629 [2024-11-19 00:08:50.073211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.629 [2024-11-19 00:08:50.073278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:43.629 [2024-11-19 00:08:50.087568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf31b8 00:24:43.629 [2024-11-19 00:08:50.089552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.629 [2024-11-19 00:08:50.089605] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:43.629 [2024-11-19 00:08:50.103860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3a28 00:24:43.629 [2024-11-19 00:08:50.105861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.629 [2024-11-19 00:08:50.105922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:43.629 [2024-11-19 00:08:50.120519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4298 00:24:43.629 [2024-11-19 00:08:50.122362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.629 [2024-11-19 00:08:50.122420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:43.629 [2024-11-19 00:08:50.137315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4b08 00:24:43.629 [2024-11-19 00:08:50.139197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.629 [2024-11-19 00:08:50.139240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:43.629 [2024-11-19 00:08:50.153822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5378 00:24:43.629 [2024-11-19 00:08:50.155618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.629 [2024-11-19 00:08:50.155695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:43.629 [2024-11-19 00:08:50.170361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf5be8 00:24:43.629 [2024-11-19 00:08:50.172227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.629 [2024-11-19 00:08:50.172289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:43.629 [2024-11-19 00:08:50.187164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6458 00:24:43.629 [2024-11-19 00:08:50.189196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.629 [2024-11-19 00:08:50.189257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:43.629 [2024-11-19 00:08:50.205429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6cc8 00:24:43.629 [2024-11-19 00:08:50.207465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.629 [2024-11-19 
00:08:50.207528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:43.629 [2024-11-19 00:08:50.223865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7538 00:24:43.629 [2024-11-19 00:08:50.225804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.629 [2024-11-19 00:08:50.225850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:43.629 [2024-11-19 00:08:50.241340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7da8 00:24:43.629 [2024-11-19 00:08:50.243236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.629 [2024-11-19 00:08:50.243280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:43.629 14991.50 IOPS, 58.56 MiB/s [2024-11-19T00:08:50.321Z] [2024-11-19 00:08:50.260216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8618 00:24:43.629 [2024-11-19 00:08:50.262059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.629 [2024-11-19 00:08:50.262112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:43.629 00:24:43.629 Latency(us) 00:24:43.629 [2024-11-19T00:08:50.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.629 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:43.629 nvme0n1 : 2.01 14984.44 58.53 0.00 0.00 8534.22 7685.59 31218.97 00:24:43.629 [2024-11-19T00:08:50.321Z] =================================================================================================================== 00:24:43.629 [2024-11-19T00:08:50.321Z] Total : 14984.44 58.53 0.00 0.00 8534.22 7685.59 31218.97 00:24:43.629 { 00:24:43.629 "results": [ 00:24:43.629 { 00:24:43.629 "job": "nvme0n1", 00:24:43.629 "core_mask": "0x2", 00:24:43.629 "workload": "randwrite", 00:24:43.629 "status": "finished", 00:24:43.629 "queue_depth": 128, 00:24:43.629 "io_size": 4096, 00:24:43.629 "runtime": 2.009485, 00:24:43.629 "iops": 14984.436310796049, 00:24:43.629 "mibps": 58.532954339047066, 00:24:43.629 "io_failed": 0, 00:24:43.629 "io_timeout": 0, 00:24:43.629 "avg_latency_us": 8534.221856705946, 00:24:43.629 "min_latency_us": 7685.585454545455, 00:24:43.630 "max_latency_us": 31218.967272727274 00:24:43.630 } 00:24:43.630 ], 00:24:43.630 "core_count": 1 00:24:43.630 } 00:24:43.630 00:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:43.630 00:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:43.630 00:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:43.630 | .driver_specific 00:24:43.630 | .nvme_error 00:24:43.630 | .status_code 00:24:43.630 | .command_transient_transport_error' 00:24:43.630 00:08:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:43.890 00:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 118 > 0 )) 00:24:43.890 00:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86502 00:24:43.890 00:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 86502 ']' 00:24:43.890 00:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 86502 00:24:43.890 00:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:43.890 00:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.890 00:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86502 00:24:44.150 00:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:44.150 killing process with pid 86502 00:24:44.150 00:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:44.150 00:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86502' 00:24:44.150 Received shutdown signal, test time was about 2.000000 seconds 00:24:44.150 00:24:44.150 Latency(us) 00:24:44.150 [2024-11-19T00:08:50.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.150 [2024-11-19T00:08:50.842Z] =================================================================================================================== 00:24:44.150 [2024-11-19T00:08:50.842Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:44.150 00:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 86502 00:24:44.150 00:08:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 86502 00:24:44.720 00:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:24:44.720 00:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:44.720 00:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:44.720 00:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:44.720 00:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:44.720 00:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86565 00:24:44.720 00:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:24:44.720 00:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86565 /var/tmp/bperf.sock 00:24:44.720 00:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 86565 ']' 00:24:44.720 00:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:44.720 00:08:51 
00:24:44.720 00:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86565 /var/tmp/bperf.sock
00:24:44.720 00:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 86565 ']'
00:24:44.720 00:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:44.720 00:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:44.720 00:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:44.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:44.720 00:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:44.720 00:08:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:44.720 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:44.720 Zero copy mechanism will not be used.
00:24:44.720 [2024-11-19 00:08:51.406029] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:24:44.720 [2024-11-19 00:08:51.406193] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86565 ]
00:24:44.979 [2024-11-19 00:08:51.581752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:45.239 [2024-11-19 00:08:51.669025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:45.239 [2024-11-19 00:08:51.819485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:24:45.808 00:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:45.808 00:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:24:45.808 00:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:45.808 00:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:46.067 00:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:46.067 00:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.067 00:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:46.067 00:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.067 00:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:46.067 00:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:46.327 nvme0n1
00:24:46.327 00:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:46.327 00:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.327 00:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:46.327 00:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
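With that, the failure mode is fully armed before perform_tests fires: the host bdev layer counts NVMe errors and retries failed I/O indefinitely, the controller is attached with TCP data digest (--ddgst) enabled, and the target's accel layer will corrupt its next 32 crc32c operations so the data-digest check on the connection fails for those PDUs. Condensed from the RPCs above into a sketch (bperf_rpc targets /var/tmp/bperf.sock; rpc_cmd, by assumption here, goes to the nvmf target application's default RPC socket):

# Host side: per-bdev NVMe error accounting plus unlimited retries, then
# attach the controller with data digest enabled.
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Target side: corrupt 32 crc32c operations in the accel layer to force
# data digest mismatches on incoming write PDUs.
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

Each injected corruption then shows up below as a three-line signature: a data_crc32_calc_done digest error from tcp.c, the WRITE command it hit, and a completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the -1 retry count converts into a retry instead of a failed I/O.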
00:24:46.328 00:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:46.328 00:08:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:46.328 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:46.328 Zero copy mechanism will not be used.
00:24:46.328 Running I/O for 2 seconds...
00:24:46.328 [2024-11-19 00:08:52.977134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:46.328 [2024-11-19 00:08:52.977256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.328 [2024-11-19 00:08:52.977297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:46.328 [2024-11-19 00:08:52.983106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:46.328 [2024-11-19 00:08:52.983217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.328 [2024-11-19 00:08:52.983248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:46.328 [2024-11-19 00:08:52.989098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:46.328 [2024-11-19 00:08:52.989219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.328 [2024-11-19 00:08:52.989248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:46.328 [2024-11-19 00:08:52.994763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:46.328 [2024-11-19 00:08:52.994872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.328 [2024-11-19 00:08:52.994908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:46.328 [2024-11-19 00:08:53.000652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:46.328 [2024-11-19 00:08:53.000960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.328 [2024-11-19 00:08:53.000991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:46.328 [2024-11-19 00:08:53.006562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:46.328 [2024-11-19 00:08:53.006724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4
nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.328 [2024-11-19 00:08:53.006754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.328 [2024-11-19 00:08:53.012822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.328 [2024-11-19 00:08:53.012962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.328 [2024-11-19 00:08:53.013014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.589 [2024-11-19 00:08:53.019265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.589 [2024-11-19 00:08:53.019391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.589 [2024-11-19 00:08:53.019443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.589 [2024-11-19 00:08:53.025240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.589 [2024-11-19 00:08:53.025371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.589 [2024-11-19 00:08:53.025399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.589 [2024-11-19 00:08:53.031100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.589 [2024-11-19 00:08:53.031197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.589 [2024-11-19 00:08:53.031231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.589 [2024-11-19 00:08:53.036947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.589 [2024-11-19 00:08:53.037069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.589 [2024-11-19 00:08:53.037105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.589 [2024-11-19 00:08:53.042582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.589 [2024-11-19 00:08:53.042712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.589 [2024-11-19 00:08:53.042741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.589 [2024-11-19 00:08:53.048238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.589 [2024-11-19 00:08:53.048539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.589 [2024-11-19 00:08:53.048571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.589 [2024-11-19 00:08:53.054184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.589 [2024-11-19 00:08:53.054299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.589 [2024-11-19 00:08:53.054334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.589 [2024-11-19 00:08:53.059893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.589 [2024-11-19 00:08:53.060004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.589 [2024-11-19 00:08:53.060054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.589 [2024-11-19 00:08:53.065704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.589 [2024-11-19 00:08:53.065830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.589 [2024-11-19 00:08:53.065858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.589 [2024-11-19 00:08:53.071361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.589 [2024-11-19 00:08:53.071486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.589 [2024-11-19 00:08:53.071519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.589 [2024-11-19 00:08:53.077420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.589 [2024-11-19 00:08:53.077513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.589 [2024-11-19 00:08:53.077548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.589 [2024-11-19 00:08:53.083169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.589 [2024-11-19 00:08:53.083289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.589 [2024-11-19 00:08:53.083317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.589 [2024-11-19 00:08:53.089090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:24:46.589 [2024-11-19 00:08:53.089211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.589 [2024-11-19 00:08:53.089240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.094676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.094799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.094837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.100385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.100736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.100766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.106359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.106471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.106499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.112117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.112416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.112456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.118340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.118439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.118476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.124125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.124409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.124439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.130251] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.130383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.130433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.136332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.136590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.136663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.142328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.142432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.142460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.148082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.148371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.148404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.154166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.154276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.154312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.159948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.160087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.160123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.165809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.165932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.165959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0006 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.171464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.171763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.171803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.177674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.177788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.177824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.183364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.183690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.183721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.189444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.189565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.189592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.195237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.195473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.195510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.201433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.201547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.201575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.207059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.207317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.207346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.213091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.213215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.213249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.218771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.218875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.218911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.224420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.224560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.224590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.230156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.230396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.230429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.236131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.236236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.236272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.241790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.241908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.590 [2024-11-19 00:08:53.241935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.590 [2024-11-19 00:08:53.247446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.590 [2024-11-19 00:08:53.247563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.591 [2024-11-19 
00:08:53.247592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.591 [2024-11-19 00:08:53.253331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.591 [2024-11-19 00:08:53.253581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.591 [2024-11-19 00:08:53.253634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.591 [2024-11-19 00:08:53.259177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.591 [2024-11-19 00:08:53.259272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.591 [2024-11-19 00:08:53.259310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.591 [2024-11-19 00:08:53.264976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.591 [2024-11-19 00:08:53.265093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.591 [2024-11-19 00:08:53.265121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.591 [2024-11-19 00:08:53.270676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.591 [2024-11-19 00:08:53.270788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.591 [2024-11-19 00:08:53.270822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.851 [2024-11-19 00:08:53.277077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.851 [2024-11-19 00:08:53.277334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.851 [2024-11-19 00:08:53.277373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.851 [2024-11-19 00:08:53.283555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.851 [2024-11-19 00:08:53.283722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.851 [2024-11-19 00:08:53.283751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.851 [2024-11-19 00:08:53.289395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.851 [2024-11-19 00:08:53.289509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.851 [2024-11-19 00:08:53.289537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.851 [2024-11-19 00:08:53.295205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.851 [2024-11-19 00:08:53.295322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.851 [2024-11-19 00:08:53.295360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.301194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.301325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.301352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.306946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.307059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.307087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.312598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.312907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.312944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.318728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.318821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.318860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.324537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.324869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.324898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.330612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.330709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.330740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.336329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.336587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.336657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.342418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.342538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.342566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.348009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.348258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.348312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.354049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.354158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.354198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.359640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.359745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.359780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.365402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.365520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.365547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.371334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.371599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.371650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.377363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.377470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.377506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.383082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.383322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.383351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.389074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.389193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.389220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.394686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.394795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.394830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.400335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.400433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.400472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.406113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.406365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.406394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.412033] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.412143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.412179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.417769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.417880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.417915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.423379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.423496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.423524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.429127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.429376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.429405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.435053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.435164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.435200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.440797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.440905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.440940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.446470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.446585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.852 [2024-11-19 00:08:53.446626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0006 p:0 m:0 dnr:0 00:24:46.852 [2024-11-19 00:08:53.452147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.852 [2024-11-19 00:08:53.452439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.853 [2024-11-19 00:08:53.452477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.853 [2024-11-19 00:08:53.458076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.853 [2024-11-19 00:08:53.458200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.853 [2024-11-19 00:08:53.458235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.853 [2024-11-19 00:08:53.463741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.853 [2024-11-19 00:08:53.463860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.853 [2024-11-19 00:08:53.463888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.853 [2024-11-19 00:08:53.469521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.853 [2024-11-19 00:08:53.469652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.853 [2024-11-19 00:08:53.469695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.853 [2024-11-19 00:08:53.475093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.853 [2024-11-19 00:08:53.475330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.853 [2024-11-19 00:08:53.475366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.853 [2024-11-19 00:08:53.480999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.853 [2024-11-19 00:08:53.481119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.853 [2024-11-19 00:08:53.481147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.853 [2024-11-19 00:08:53.486662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.853 [2024-11-19 00:08:53.486794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.853 [2024-11-19 00:08:53.486821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.853 [2024-11-19 00:08:53.492315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.853 [2024-11-19 00:08:53.492434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.853 [2024-11-19 00:08:53.492470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.853 [2024-11-19 00:08:53.498195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.853 [2024-11-19 00:08:53.498452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.853 [2024-11-19 00:08:53.498490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.853 [2024-11-19 00:08:53.504027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.853 [2024-11-19 00:08:53.504328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.853 [2024-11-19 00:08:53.504363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:46.853 [2024-11-19 00:08:53.510220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.853 [2024-11-19 00:08:53.510331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.853 [2024-11-19 00:08:53.510358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:46.853 [2024-11-19 00:08:53.516056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.853 [2024-11-19 00:08:53.516135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.853 [2024-11-19 00:08:53.516171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:46.853 [2024-11-19 00:08:53.521793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.853 [2024-11-19 00:08:53.521893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.853 [2024-11-19 00:08:53.521920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:46.853 [2024-11-19 00:08:53.527490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:46.853 [2024-11-19 00:08:53.527600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.853 [2024-11-19 
00:08:53.527656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:46.853 [2024-11-19 00:08:53.533335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:46.853 [2024-11-19 00:08:53.533440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.853 [2024-11-19 00:08:53.533476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.114 [2024-11-19 00:08:53.539763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.114 [2024-11-19 00:08:53.539848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.114 [2024-11-19 00:08:53.539901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.114 [2024-11-19 00:08:53.545980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.114 [2024-11-19 00:08:53.546085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.114 [2024-11-19 00:08:53.546112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.114 [2024-11-19 00:08:53.551695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.114 [2024-11-19 00:08:53.551798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.114 [2024-11-19 00:08:53.551833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.114 [2024-11-19 00:08:53.557574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.114 [2024-11-19 00:08:53.557677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.114 [2024-11-19 00:08:53.557712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.114 [2024-11-19 00:08:53.563286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.114 [2024-11-19 00:08:53.563398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.114 [2024-11-19 00:08:53.563425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.114 [2024-11-19 00:08:53.569179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.114 [2024-11-19 00:08:53.569284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.114 [2024-11-19 00:08:53.569312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.114 [2024-11-19 00:08:53.574976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.114 [2024-11-19 00:08:53.575084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.114 [2024-11-19 00:08:53.575120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.114 [2024-11-19 00:08:53.580593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.114 [2024-11-19 00:08:53.580724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.114 [2024-11-19 00:08:53.580760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.114 [2024-11-19 00:08:53.586543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.114 [2024-11-19 00:08:53.586686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.114 [2024-11-19 00:08:53.586729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.114 [2024-11-19 00:08:53.592511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.114 [2024-11-19 00:08:53.592675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.114 [2024-11-19 00:08:53.592722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.114 [2024-11-19 00:08:53.598379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.114 [2024-11-19 00:08:53.598477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.114 [2024-11-19 00:08:53.598512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.114 [2024-11-19 00:08:53.604083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.114 [2024-11-19 00:08:53.604188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.114 [2024-11-19 00:08:53.604215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.114 [2024-11-19 00:08:53.609972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.114 [2024-11-19 00:08:53.610087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.114 [2024-11-19 00:08:53.610115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.114 [2024-11-19 00:08:53.615623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.114 [2024-11-19 00:08:53.615715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.114 [2024-11-19 00:08:53.615750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.114 [2024-11-19 00:08:53.621343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.114 [2024-11-19 00:08:53.621443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.114 [2024-11-19 00:08:53.621479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.114 [2024-11-19 00:08:53.627025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.114 [2024-11-19 00:08:53.627139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.627167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.632842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.632944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.632994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.638444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.638552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.638589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.644189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.644345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.644374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.649834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.649948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.649977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.655563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.655674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.655713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.661195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.661301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.661336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.666817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.666915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.666942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.672550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.672699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.672734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.678331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.678434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.678471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.684011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.684126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.684154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.689970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.690068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.690096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.695630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.695740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.695775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.701439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.701537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.701572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.707056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.707168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.707196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.712839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.712949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.712988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.718572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.718695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.718730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.724423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.724540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.724569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.730142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.730247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.730273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.735924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.736015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.736050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.741651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.741746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.741782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.747429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.747539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.747566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.753206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.753306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.753342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.759000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.759117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.759154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.764691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.115 [2024-11-19 00:08:53.764806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.115 [2024-11-19 00:08:53.764833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.115 [2024-11-19 00:08:53.770468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.116 [2024-11-19 00:08:53.770577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.116 [2024-11-19 00:08:53.770620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.116 [2024-11-19 00:08:53.776117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.116 [2024-11-19 00:08:53.776225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.116 [2024-11-19 00:08:53.776260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.116 [2024-11-19 00:08:53.782013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.116 [2024-11-19 00:08:53.782127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.116 [2024-11-19 00:08:53.782170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.116 [2024-11-19 00:08:53.787888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.116 [2024-11-19 00:08:53.787988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.116 [2024-11-19 00:08:53.788016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.116 [2024-11-19 00:08:53.793844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.116 [2024-11-19 00:08:53.793933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.116 [2024-11-19 00:08:53.793969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.116 [2024-11-19 00:08:53.800021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.116 [2024-11-19 00:08:53.800143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.116 [2024-11-19 00:08:53.800178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.377 [2024-11-19 00:08:53.806436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.377 [2024-11-19 00:08:53.806530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.377 [2024-11-19 00:08:53.806558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.377 [2024-11-19 00:08:53.812499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.377 [2024-11-19 00:08:53.812631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.377 [2024-11-19 00:08:53.812710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.377 [2024-11-19 00:08:53.818440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.377 [2024-11-19 00:08:53.818533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.377 [2024-11-19 00:08:53.818568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.377 [2024-11-19 00:08:53.824372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.377 [2024-11-19 00:08:53.824492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.377 [2024-11-19 00:08:53.824523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.377 [2024-11-19 00:08:53.830695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.377 [2024-11-19 00:08:53.830828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.377 [2024-11-19 00:08:53.830871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.377 [2024-11-19 00:08:53.837375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.377 [2024-11-19 00:08:53.837465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.377 [2024-11-19 00:08:53.837504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.377 [2024-11-19 00:08:53.844111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.377 [2024-11-19 00:08:53.844221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.377 [2024-11-19 00:08:53.844249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.377 [2024-11-19 00:08:53.850422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.377 [2024-11-19 00:08:53.850531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.377 [2024-11-19 00:08:53.850557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.377 [2024-11-19 00:08:53.856977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.377 [2024-11-19 00:08:53.857090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.377 [2024-11-19 00:08:53.857125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.377 [2024-11-19 00:08:53.862933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.377 [2024-11-19 00:08:53.863065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.377 [2024-11-19 00:08:53.863092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.377 [2024-11-19 00:08:53.869139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.377 [2024-11-19 00:08:53.869245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.377 [2024-11-19 00:08:53.869272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.377 [2024-11-19 00:08:53.874801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.377 [2024-11-19 00:08:53.874905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.377 [2024-11-19 00:08:53.874939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.377 [2024-11-19 00:08:53.880741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.377 [2024-11-19 00:08:53.880845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.377 [2024-11-19 00:08:53.880897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.377 [2024-11-19 00:08:53.886833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.377 [2024-11-19 00:08:53.886932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.377 [2024-11-19 00:08:53.886959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.377 [2024-11-19 00:08:53.892585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.377 [2024-11-19 00:08:53.892732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.377 [2024-11-19 00:08:53.892766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.377 [2024-11-19 00:08:53.898329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.377 [2024-11-19 00:08:53.898438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.377 [2024-11-19 00:08:53.898475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.377 [2024-11-19 00:08:53.904083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:53.904201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:53.904229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:53.909805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:53.909902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:53.909929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:53.915565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:53.915675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:53.915711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:53.921345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:53.921459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:53.921495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:53.927139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:53.927255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:53.927283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:53.932899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:53.933023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:53.933060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:53.938985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:53.939084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:53.939121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:53.944883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:53.944993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:53.945020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:53.951268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:53.951386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:53.951414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:53.957568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:53.957709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:53.957761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:53.963907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:53.964055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:53.964083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:53.970225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 5241.00 IOPS, 655.12 MiB/s [2024-11-19T00:08:54.070Z] [2024-11-19 00:08:53.971881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:53.971925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:53.977467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:53.977577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:53.977605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:53.983646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:53.983751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:53.983778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:53.989697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:53.989789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:53.989818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:53.995779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:53.995906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:53.995935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:54.001867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:54.001975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:54.002004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:54.007691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:54.007804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:54.007832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:54.013692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:54.013792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:54.013820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:54.019752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:54.019845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:54.019873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:54.025725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:54.025836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:54.025864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:54.031653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:54.031759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:54.031787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.378 [2024-11-19 00:08:54.037520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.378 [2024-11-19 00:08:54.037646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.378 [2024-11-19 00:08:54.037676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.379 [2024-11-19 00:08:54.043666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.379 [2024-11-19 00:08:54.043769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.379 [2024-11-19 00:08:54.043798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.379 [2024-11-19 00:08:54.049473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.379 [2024-11-19 00:08:54.049574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.379 [2024-11-19 00:08:54.049602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.379 [2024-11-19 00:08:54.055285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.379 [2024-11-19 00:08:54.055388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.379 [2024-11-19 00:08:54.055416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.379 [2024-11-19 00:08:54.061347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.379 [2024-11-19 00:08:54.061452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.379 [2024-11-19 00:08:54.061480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.643 [2024-11-19 00:08:54.067885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.643 [2024-11-19 00:08:54.067988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.643 [2024-11-19 00:08:54.068016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.643 [2024-11-19 00:08:54.074306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.643 [2024-11-19 00:08:54.074415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.643 [2024-11-19 00:08:54.074443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.643 [2024-11-19 00:08:54.080166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.643 [2024-11-19 00:08:54.080272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.643 [2024-11-19 00:08:54.080342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.643 [2024-11-19 00:08:54.086131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.643 [2024-11-19 00:08:54.086242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.643 [2024-11-19 00:08:54.086270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.643 [2024-11-19 00:08:54.092102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.643 [2024-11-19 00:08:54.092209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.643 [2024-11-19 00:08:54.092252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.643 [2024-11-19 00:08:54.098074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.643 [2024-11-19 00:08:54.098178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.643 [2024-11-19 00:08:54.098205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.643 [2024-11-19 00:08:54.103928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.643 [2024-11-19 00:08:54.104030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.643 [2024-11-19 00:08:54.104058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.643 [2024-11-19 00:08:54.109807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.643 [2024-11-19 00:08:54.109916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.643 [2024-11-19 00:08:54.109944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.643 [2024-11-19 00:08:54.115891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.643 [2024-11-19 00:08:54.116002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.643 [2024-11-19 00:08:54.116031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.643 [2024-11-19 00:08:54.121817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.643 [2024-11-19 00:08:54.121923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.643 [2024-11-19 00:08:54.121951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.644 [2024-11-19 00:08:54.127817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.644 [2024-11-19 00:08:54.127913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.644 [2024-11-19 00:08:54.127942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.644 [2024-11-19 00:08:54.134074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.644 [2024-11-19 00:08:54.134180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.644 [2024-11-19 00:08:54.134208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.644 [2024-11-19 00:08:54.139950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.644 [2024-11-19 00:08:54.140059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.644 [2024-11-19 00:08:54.140087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.644 [2024-11-19 00:08:54.145867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.644 [2024-11-19 00:08:54.145970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.644 [2024-11-19 00:08:54.145999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.644 [2024-11-19 00:08:54.151819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.644 [2024-11-19 00:08:54.151918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.644 [2024-11-19 00:08:54.151946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.644 [2024-11-19 00:08:54.157926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.644 [2024-11-19 00:08:54.158051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.644 [2024-11-19 00:08:54.158078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.644 [2024-11-19 00:08:54.163788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.644 [2024-11-19 00:08:54.163897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.644 [2024-11-19 00:08:54.163925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.644 [2024-11-19 00:08:54.169714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.644 [2024-11-19 00:08:54.169823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.644 [2024-11-19 00:08:54.169850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.644 [2024-11-19 00:08:54.175875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.644 [2024-11-19 00:08:54.175977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.644 [2024-11-19 00:08:54.176006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.644 [2024-11-19 00:08:54.181780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.644 [2024-11-19 00:08:54.181891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.644 [2024-11-19 00:08:54.181918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.644 [2024-11-19 00:08:54.187718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.644 [2024-11-19 00:08:54.187829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.644 [2024-11-19 00:08:54.187858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.644 [2024-11-19 00:08:54.193824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.644 [2024-11-19 00:08:54.193908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.644 [2024-11-19 00:08:54.193937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.644 [2024-11-19 00:08:54.199820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.644 [2024-11-19 00:08:54.199943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.644 [2024-11-19 00:08:54.199970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.644 [2024-11-19 00:08:54.205844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.644 [2024-11-19 00:08:54.205937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.644 [2024-11-19 00:08:54.205965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.644 [2024-11-19 00:08:54.211717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.644 [2024-11-19 00:08:54.211818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.644 [2024-11-19 00:08:54.211847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.644 [2024-11-19 00:08:54.217407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.644 [2024-11-19 00:08:54.217510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.644 [2024-11-19 00:08:54.217537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.645 [2024-11-19 00:08:54.223144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.645 [2024-11-19 00:08:54.223244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.645 [2024-11-19 00:08:54.223272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.645 [2024-11-19 00:08:54.228872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.645 [2024-11-19 00:08:54.228978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.645 [2024-11-19 00:08:54.229005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.645 [2024-11-19 00:08:54.234932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.645 [2024-11-19 00:08:54.235041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.645 [2024-11-19 00:08:54.235069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.645 [2024-11-19 00:08:54.240585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.645 [2024-11-19 00:08:54.240735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.645 [2024-11-19 00:08:54.240762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.645 [2024-11-19 00:08:54.246469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.645 [2024-11-19 00:08:54.246573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.645 [2024-11-19 00:08:54.246601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.645 [2024-11-19 00:08:54.252132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.645 [2024-11-19 00:08:54.252238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.645 [2024-11-19 00:08:54.252266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.645 [2024-11-19 00:08:54.258066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.645 [2024-11-19 00:08:54.258172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.645 [2024-11-19 00:08:54.258200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.645 [2024-11-19 00:08:54.263708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.645 [2024-11-19 00:08:54.263817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.645 [2024-11-19 00:08:54.263845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.645 [2024-11-19 00:08:54.269608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.645 [2024-11-19 00:08:54.269715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.645 [2024-11-19 00:08:54.269743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.645 [2024-11-19 00:08:54.275283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.645 [2024-11-19 00:08:54.275375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.645 [2024-11-19 00:08:54.275403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.645 [2024-11-19 00:08:54.281131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.645 [2024-11-19 00:08:54.281229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.645 [2024-11-19 00:08:54.281256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.645 [2024-11-19 00:08:54.286858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.645 [2024-11-19 00:08:54.286966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.645 [2024-11-19 00:08:54.286994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.645 [2024-11-19 00:08:54.292696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.645 [2024-11-19 00:08:54.292802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.645 [2024-11-19 00:08:54.292831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.645 [2024-11-19 00:08:54.298398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.645 [2024-11-19 00:08:54.298488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.645 [2024-11-19 00:08:54.298515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.645 [2024-11-19 00:08:54.304370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.645 [2024-11-19 00:08:54.304463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.645 [2024-11-19 00:08:54.304491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.645 [2024-11-19 00:08:54.310161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.645 [2024-11-19 00:08:54.310258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.645 [2024-11-19 00:08:54.310286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.645 [2024-11-19 00:08:54.316040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.645 [2024-11-19 00:08:54.316146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.645 [2024-11-19 00:08:54.316173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:47.645 [2024-11-19 00:08:54.321748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.645 [2024-11-19 00:08:54.321854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.645 [2024-11-19 00:08:54.321881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:47.907 [2024-11-19 00:08:54.328357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.907 [2024-11-19 00:08:54.328464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.907 [2024-11-19 00:08:54.328527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:47.907 [2024-11-19 00:08:54.334525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.907 [2024-11-19 00:08:54.334643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.907 [2024-11-19 00:08:54.334671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:47.907 [2024-11-19 00:08:54.340794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:47.907 [2024-11-19 00:08:54.340899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.907 [2024-11-19 00:08:54.340928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.346472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.346581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.346608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.352205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.352358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.352389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.358003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.358105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.358132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.363688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.363801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.363828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.369547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.369682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.369710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.375372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.375474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.375502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.381057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.381159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 
00:08:54.381186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.386870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.386964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.386992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.392575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.392761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.392789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.398401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.398506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.398534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.404450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.404576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.404605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.410546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.410667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.410695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.416395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.416493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.416523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.422186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.422297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22432 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.422325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.427820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.427921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.427948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.433740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.433846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.433875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.439431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.439530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.439557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.445229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.445327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.445356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.450823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.450927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.450954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.456684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.456774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.456802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.462374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.462469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.462496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.468114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.907 [2024-11-19 00:08:54.468233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.907 [2024-11-19 00:08:54.468261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:47.907 [2024-11-19 00:08:54.473821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.473929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.473956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.479637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.479738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.479767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.485304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.485395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.485423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.491017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.491125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.491153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.496789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.496898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.496925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.502634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.502743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.502771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.508237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.508388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.508416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.514139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.514248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.514275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.519776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.519874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.519902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.525673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.525767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.525795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.531300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.531408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.531435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.537258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.537347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.537375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.542911] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.543023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.543051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.548832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.548935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.548963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.554469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.554567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.554595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.560127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.560234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.560262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.565822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.565923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.565950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.571538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.571656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.571683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.577167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.577265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.577292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0006 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.582975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.583082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.583110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:47.908 [2024-11-19 00:08:54.588805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:47.908 [2024-11-19 00:08:54.588998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.908 [2024-11-19 00:08:54.589043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:48.169 [2024-11-19 00:08:54.595334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.169 [2024-11-19 00:08:54.595471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.169 [2024-11-19 00:08:54.595498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:48.169 [2024-11-19 00:08:54.601573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.169 [2024-11-19 00:08:54.601705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.169 [2024-11-19 00:08:54.601732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:48.169 [2024-11-19 00:08:54.607412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.169 [2024-11-19 00:08:54.607517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.169 [2024-11-19 00:08:54.607545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:48.169 [2024-11-19 00:08:54.613189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.169 [2024-11-19 00:08:54.613289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.169 [2024-11-19 00:08:54.613316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:48.169 [2024-11-19 00:08:54.618949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.169 [2024-11-19 00:08:54.619066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.169 [2024-11-19 00:08:54.619093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:48.169 [2024-11-19 00:08:54.624669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.169 [2024-11-19 00:08:54.624771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.169 [2024-11-19 00:08:54.624797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:48.169 [2024-11-19 00:08:54.630485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.169 [2024-11-19 00:08:54.630574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.169 [2024-11-19 00:08:54.630618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:48.169 [2024-11-19 00:08:54.636206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.169 [2024-11-19 00:08:54.636350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.169 [2024-11-19 00:08:54.636378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:48.169 [2024-11-19 00:08:54.642130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.169 [2024-11-19 00:08:54.642223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.642250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.647789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.647887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.647915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.653727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.653831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.653857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.659498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.659611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 
00:08:54.659672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.665248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.665348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.665376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.671186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.671290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.671317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.676900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.677006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.677033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.682630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.682735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.682763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.688355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.688484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.688512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.694218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.694311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.694338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.699929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.700026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:608 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.700054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.705823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.705908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.705935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.711493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.711593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.711619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.717219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.717311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.717338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.722906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.723041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.723067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.728753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.728853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.728880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.734360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.734467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.734495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.740229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.740373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.740402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.746021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.746112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.746139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.751800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.751908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.751936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.757657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.757766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.757793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.763342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.763451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.763479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:48.170 [2024-11-19 00:08:54.769097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.170 [2024-11-19 00:08:54.769190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.170 [2024-11-19 00:08:54.769217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:48.171 [2024-11-19 00:08:54.774829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.171 [2024-11-19 00:08:54.774933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.171 [2024-11-19 00:08:54.774960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:48.171 [2024-11-19 00:08:54.780522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:24:48.171 [2024-11-19 00:08:54.780633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.171 [2024-11-19 00:08:54.780690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:48.171 [2024-11-19 00:08:54.786372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.171 [2024-11-19 00:08:54.786478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.171 [2024-11-19 00:08:54.786505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:48.171 [2024-11-19 00:08:54.792055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.171 [2024-11-19 00:08:54.792158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.171 [2024-11-19 00:08:54.792184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:48.171 [2024-11-19 00:08:54.797873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.171 [2024-11-19 00:08:54.797966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.171 [2024-11-19 00:08:54.797994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:48.171 [2024-11-19 00:08:54.803479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.171 [2024-11-19 00:08:54.803584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.171 [2024-11-19 00:08:54.803655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:48.171 [2024-11-19 00:08:54.809241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.171 [2024-11-19 00:08:54.809338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.171 [2024-11-19 00:08:54.809366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:48.171 [2024-11-19 00:08:54.814847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.171 [2024-11-19 00:08:54.814950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.171 [2024-11-19 00:08:54.814976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:48.171 [2024-11-19 00:08:54.820710] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.171 [2024-11-19 00:08:54.820801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.171 [2024-11-19 00:08:54.820828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:48.171 [2024-11-19 00:08:54.826321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.171 [2024-11-19 00:08:54.826418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.171 [2024-11-19 00:08:54.826446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:48.171 [2024-11-19 00:08:54.831987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.171 [2024-11-19 00:08:54.832085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.171 [2024-11-19 00:08:54.832112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:48.171 [2024-11-19 00:08:54.837722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.171 [2024-11-19 00:08:54.837834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.171 [2024-11-19 00:08:54.837863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:48.171 [2024-11-19 00:08:54.843523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.171 [2024-11-19 00:08:54.843687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.171 [2024-11-19 00:08:54.843746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:48.171 [2024-11-19 00:08:54.849815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.171 [2024-11-19 00:08:54.849926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.171 [2024-11-19 00:08:54.849985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:48.432 [2024-11-19 00:08:54.857146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.432 [2024-11-19 00:08:54.857294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.432 [2024-11-19 00:08:54.857339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:0006 p:0 m:0 dnr:0 00:24:48.432 [2024-11-19 00:08:54.864419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.432 [2024-11-19 00:08:54.864527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.432 [2024-11-19 00:08:54.864559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:48.432 [2024-11-19 00:08:54.871086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.432 [2024-11-19 00:08:54.871197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.432 [2024-11-19 00:08:54.871224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:48.432 [2024-11-19 00:08:54.877295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.432 [2024-11-19 00:08:54.877402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.432 [2024-11-19 00:08:54.877429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:48.432 [2024-11-19 00:08:54.883386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.432 [2024-11-19 00:08:54.883493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.432 [2024-11-19 00:08:54.883520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:48.432 [2024-11-19 00:08:54.889190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.432 [2024-11-19 00:08:54.889282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.432 [2024-11-19 00:08:54.889310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:48.432 [2024-11-19 00:08:54.894946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.432 [2024-11-19 00:08:54.895055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.432 [2024-11-19 00:08:54.895083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:48.432 [2024-11-19 00:08:54.900693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.432 [2024-11-19 00:08:54.900782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.432 [2024-11-19 00:08:54.900810] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:48.432 [2024-11-19 00:08:54.906458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.432 [2024-11-19 00:08:54.906560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.432 [2024-11-19 00:08:54.906587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:48.432 [2024-11-19 00:08:54.912180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.432 [2024-11-19 00:08:54.912270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.432 [2024-11-19 00:08:54.912339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:48.432 [2024-11-19 00:08:54.918141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.432 [2024-11-19 00:08:54.918242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.432 [2024-11-19 00:08:54.918270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:48.432 [2024-11-19 00:08:54.923748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.432 [2024-11-19 00:08:54.923849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.432 [2024-11-19 00:08:54.923875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:48.432 [2024-11-19 00:08:54.929681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.432 [2024-11-19 00:08:54.929790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.433 [2024-11-19 00:08:54.929818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:48.433 [2024-11-19 00:08:54.935310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.433 [2024-11-19 00:08:54.935412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.433 [2024-11-19 00:08:54.935439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:48.433 [2024-11-19 00:08:54.941107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:24:48.433 [2024-11-19 00:08:54.941196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.433 [2024-11-19 
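For context on the block above: NVMe/TCP optionally protects each data PDU with a CRC32C data digest (DDGST). Each *ERROR* line is tcp.c reporting that the digest computed over a received payload does not match the DDGST carried in the PDU, and the affected WRITE is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), so the host treats the data as suspect and retryable rather than as a hard I/O failure. A minimal sketch of that check, assuming a raw payload and the received digest value (the bitwise CRC32C below is illustrative only; SPDK uses accelerated implementations):

    def crc32c(data: bytes) -> int:
        # Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78) --
        # the checksum NVMe/TCP uses for its header and data digests.
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    def data_digest_ok(payload: bytes, ddgst_received: int) -> bool:
        # Mirrors the comparison behind "Data digest error on tqpair=...":
        # recompute the digest over the received payload and compare it
        # against the DDGST field carried at the tail of the data PDU.
        return crc32c(payload) == ddgst_received

The digest mismatches here are injected deliberately by this test, which is why "io_failed" stays 0 in the results below while the transient-transport-error counter climbs.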
00:24:48.433 [2024-11-19 00:08:54.970204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8
00:24:48.433 5255.50 IOPS, 656.94 MiB/s [2024-11-19T00:08:55.125Z]
[2024-11-19 00:08:54.971680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.433 [2024-11-19 00:08:54.971731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:48.433
00:24:48.433 Latency(us)
00:24:48.433 [2024-11-19T00:08:55.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:48.433 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:24:48.433 nvme0n1 : 2.00 5254.72 656.84 0.00 0.00 3037.15 1980.97 10426.18
00:24:48.433 [2024-11-19T00:08:55.125Z] ===================================================================================================================
00:24:48.433 [2024-11-19T00:08:55.125Z] Total : 5254.72 656.84 0.00 0.00 3037.15 1980.97 10426.18
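The JSON record that follows restates the table above, and the two are easy to cross-check: at an I/O size of 131072 bytes (128 KiB, one eighth of a MiB), throughput in MiB/s is simply IOPS divided by 8:

    iops = 5254.716298700165          # "iops" from the JSON record below
    io_size = 131072                  # bytes per I/O ("io_size")
    mibps = iops * io_size / (1024 * 1024)
    print(round(mibps, 2))            # 656.84, matching the reported MiB/s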
"finished", 00:24:48.433 "queue_depth": 16, 00:24:48.433 "io_size": 131072, 00:24:48.433 "runtime": 2.004485, 00:24:48.433 "iops": 5254.716298700165, 00:24:48.433 "mibps": 656.8395373375206, 00:24:48.433 "io_failed": 0, 00:24:48.433 "io_timeout": 0, 00:24:48.433 "avg_latency_us": 3037.153084591285, 00:24:48.433 "min_latency_us": 1980.9745454545455, 00:24:48.433 "max_latency_us": 10426.181818181818 00:24:48.433 } 00:24:48.433 ], 00:24:48.433 "core_count": 1 00:24:48.433 } 00:24:48.433 00:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:48.433 00:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:48.433 00:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:48.433 00:08:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:48.433 | .driver_specific 00:24:48.433 | .nvme_error 00:24:48.433 | .status_code 00:24:48.433 | .command_transient_transport_error' 00:24:48.693 00:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 340 > 0 )) 00:24:48.693 00:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86565 00:24:48.693 00:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 86565 ']' 00:24:48.693 00:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 86565 00:24:48.693 00:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:48.693 00:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:48.693 00:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86565 00:24:48.693 00:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:48.693 killing process with pid 86565 00:24:48.693 00:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:48.693 00:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86565' 00:24:48.693 Received shutdown signal, test time was about 2.000000 seconds 00:24:48.693 00:24:48.693 Latency(us) 00:24:48.693 [2024-11-19T00:08:55.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.693 [2024-11-19T00:08:55.385Z] =================================================================================================================== 00:24:48.693 [2024-11-19T00:08:55.385Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:48.693 00:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 86565 00:24:48.693 00:08:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 86565 00:24:49.631 00:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 86341 00:24:49.631 00:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 86341 ']' 00:24:49.631 00:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # 
kill -0 86341 00:24:49.631 00:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:49.631 00:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:49.631 00:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86341 00:24:49.631 00:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:49.631 killing process with pid 86341 00:24:49.631 00:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:49.631 00:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86341' 00:24:49.632 00:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 86341 00:24:49.632 00:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 86341 00:24:50.569 00:24:50.569 real 0m21.252s 00:24:50.569 user 0m40.713s 00:24:50.569 sys 0m4.544s 00:24:50.569 ************************************ 00:24:50.569 00:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:50.569 00:08:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:50.569 END TEST nvmf_digest_error 00:24:50.569 ************************************ 00:24:50.569 00:08:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:50.569 00:08:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:50.569 00:08:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:50.569 00:08:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:50.569 rmmod nvme_tcp 00:24:50.569 rmmod nvme_fabrics 00:24:50.569 rmmod nvme_keyring 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 86341 ']' 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 86341 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 86341 ']' 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 86341 00:24:50.569 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (86341) - No such process 00:24:50.569 Process with pid 86341 is not found 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 86341 is not found' 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:50.569 00:08:57 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:50.569 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:50.829 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:50.829 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:50.829 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.829 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.829 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.829 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:24:50.829 00:24:50.829 real 0m44.667s 00:24:50.829 user 1m24.094s 00:24:50.829 sys 0m9.477s 00:24:50.829 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:50.829 00:08:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:50.829 ************************************ 00:24:50.829 END TEST nvmf_digest 00:24:50.829 ************************************ 00:24:50.829 00:08:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:24:50.829 00:08:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:24:50.829 00:08:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:50.829 00:08:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:50.829 00:08:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:50.829 00:08:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.829 ************************************ 00:24:50.829 START TEST nvmf_host_multipath 00:24:50.829 ************************************ 00:24:50.829 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:50.829 * Looking for test storage... 00:24:50.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:50.829 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:50.829 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:24:50.829 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:51.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.090 --rc genhtml_branch_coverage=1 00:24:51.090 --rc genhtml_function_coverage=1 00:24:51.090 --rc genhtml_legend=1 00:24:51.090 --rc geninfo_all_blocks=1 00:24:51.090 --rc geninfo_unexecuted_blocks=1 00:24:51.090 00:24:51.090 ' 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:51.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.090 --rc genhtml_branch_coverage=1 00:24:51.090 --rc genhtml_function_coverage=1 00:24:51.090 --rc genhtml_legend=1 00:24:51.090 --rc geninfo_all_blocks=1 00:24:51.090 --rc geninfo_unexecuted_blocks=1 00:24:51.090 00:24:51.090 ' 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:51.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.090 --rc genhtml_branch_coverage=1 00:24:51.090 --rc genhtml_function_coverage=1 00:24:51.090 --rc genhtml_legend=1 00:24:51.090 --rc geninfo_all_blocks=1 00:24:51.090 --rc geninfo_unexecuted_blocks=1 00:24:51.090 00:24:51.090 ' 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:51.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.090 --rc genhtml_branch_coverage=1 00:24:51.090 --rc genhtml_function_coverage=1 00:24:51.090 --rc genhtml_legend=1 00:24:51.090 --rc geninfo_all_blocks=1 00:24:51.090 --rc geninfo_unexecuted_blocks=1 00:24:51.090 00:24:51.090 ' 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.090 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:51.091 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:51.091 Cannot find device "nvmf_init_br" 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:51.091 Cannot find device "nvmf_init_br2" 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:51.091 Cannot find device "nvmf_tgt_br" 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:51.091 Cannot find device "nvmf_tgt_br2" 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:51.091 Cannot find device "nvmf_init_br" 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:51.091 Cannot find device "nvmf_init_br2" 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:51.091 Cannot find device "nvmf_tgt_br" 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:51.091 Cannot find device "nvmf_tgt_br2" 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:51.091 Cannot find device "nvmf_br" 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:51.091 Cannot find device "nvmf_init_if" 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:51.091 Cannot find device "nvmf_init_if2" 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:24:51.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:51.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:51.091 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
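For orientation, the nvmf_veth_init plumbing traced here and just below reduces to roughly the following sketch. Interface names, addresses, and the namespace name are taken from the log itself; this is a condensed re-creation for readability, not the verbatim nvmf/common.sh code (the second pair, 10.0.0.2 <-> 10.0.0.4, is built the same way).

# Two initiator/target veth pairs; the target end of each pair is moved into
# a private network namespace, and all bridge-side peers join one bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, root ns
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # only the _if end moves
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge joins both sides
ip link set nvmf_tgt_br master nvmf_br

After this, the iptables ACCEPT rules and the four ping checks below verify that initiator (10.0.0.1/.2) and target (10.0.0.3/.4) addresses can reach each other across nvmf_br before the NVMe-oF target is started.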
00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:51.350 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:51.350 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:24:51.350 00:24:51.350 --- 10.0.0.3 ping statistics --- 00:24:51.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.350 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:51.350 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:51.350 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:24:51.350 00:24:51.350 --- 10.0.0.4 ping statistics --- 00:24:51.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.350 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:51.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:51.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:24:51.350 00:24:51.350 --- 10.0.0.1 ping statistics --- 00:24:51.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.350 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:51.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:51.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:24:51.350 00:24:51.350 --- 10.0.0.2 ping statistics --- 00:24:51.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.350 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=86909 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 86909 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 86909 ']' 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:51.350 00:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:51.609 [2024-11-19 00:08:58.116589] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:24:51.609 [2024-11-19 00:08:58.116767] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.868 [2024-11-19 00:08:58.313077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:51.868 [2024-11-19 00:08:58.439640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.868 [2024-11-19 00:08:58.439711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.868 [2024-11-19 00:08:58.439735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.868 [2024-11-19 00:08:58.439766] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.868 [2024-11-19 00:08:58.439784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:51.868 [2024-11-19 00:08:58.441878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.868 [2024-11-19 00:08:58.441896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.126 [2024-11-19 00:08:58.593487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:52.694 00:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:52.694 00:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:24:52.694 00:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:52.694 00:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:52.694 00:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:52.694 00:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.695 00:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=86909 00:24:52.695 00:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:52.956 [2024-11-19 00:08:59.401782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.956 00:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:53.215 Malloc0 00:24:53.215 00:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:53.474 00:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:53.733 00:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:53.992 [2024-11-19 00:09:00.451510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:53.992 00:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:24:53.992 [2024-11-19 00:09:00.675569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:54.251 00:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:54.251 00:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=86960 00:24:54.251 00:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:54.251 00:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 86960 /var/tmp/bdevperf.sock 00:24:54.251 00:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 86960 ']' 00:24:54.251 00:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:54.251 00:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:54.251 00:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:54.251 00:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.251 00:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:55.188 00:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.188 00:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:24:55.188 00:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:55.448 00:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:55.707 Nvme0n1 00:24:55.707 00:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:55.966 Nvme0n1 00:24:55.966 00:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:24:55.966 00:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:57.345 00:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:24:57.345 00:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:24:57.345 00:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:24:57.603 00:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:24:57.604 00:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87006 00:24:57.604 00:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:57.604 00:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86909 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:04.172 00:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:04.172 00:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:04.172 00:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:04.172 00:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:04.172 Attaching 4 probes... 00:25:04.172 @path[10.0.0.3, 4421]: 12829 00:25:04.172 @path[10.0.0.3, 4421]: 13304 00:25:04.172 @path[10.0.0.3, 4421]: 13186 00:25:04.172 @path[10.0.0.3, 4421]: 13201 00:25:04.172 @path[10.0.0.3, 4421]: 13257 00:25:04.172 00:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:04.172 00:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:04.172 00:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:04.172 00:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:04.172 00:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:04.172 00:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:04.172 00:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87006 00:25:04.172 00:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:04.172 00:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:25:04.172 00:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:04.172 00:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:04.432 00:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:25:04.432 00:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86909 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:04.432 00:09:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87119 00:25:04.432 00:09:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:11.040 00:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:11.040 00:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:11.040 00:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:11.040 00:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:11.040 Attaching 4 probes... 00:25:11.040 @path[10.0.0.3, 4420]: 15955 00:25:11.040 @path[10.0.0.3, 4420]: 16092 00:25:11.040 @path[10.0.0.3, 4420]: 16106 00:25:11.040 @path[10.0.0.3, 4420]: 16341 00:25:11.040 @path[10.0.0.3, 4420]: 16277 00:25:11.040 00:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:11.040 00:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:11.040 00:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:11.040 00:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:11.040 00:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:11.040 00:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:11.040 00:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87119 00:25:11.040 00:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:11.040 00:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:25:11.040 00:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:11.040 00:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:11.299 00:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:25:11.299 00:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87232 00:25:11.299 00:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:11.299 00:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86909 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:17.866 00:09:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:17.866 00:09:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:17.866 00:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:17.867 00:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:17.867 Attaching 4 probes... 00:25:17.867 @path[10.0.0.3, 4421]: 11959 00:25:17.867 @path[10.0.0.3, 4421]: 16163 00:25:17.867 @path[10.0.0.3, 4421]: 16196 00:25:17.867 @path[10.0.0.3, 4421]: 16144 00:25:17.867 @path[10.0.0.3, 4421]: 16216 00:25:17.867 00:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:17.867 00:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:17.867 00:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:17.867 00:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:17.867 00:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:17.867 00:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:17.867 00:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87232 00:25:17.867 00:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:17.867 00:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:25:17.867 00:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:17.867 00:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:18.126 00:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:25:18.126 00:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86909 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:18.126 00:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87350 00:25:18.126 00:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:24.694 00:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:24.694 00:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:25:24.694 00:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:25:24.694 00:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:24.694 Attaching 4 probes... 
00:25:24.694 00:25:24.694 00:25:24.694 00:25:24.694 00:25:24.694 00:25:24.694 00:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:24.694 00:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:24.694 00:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:24.694 00:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:25:24.694 00:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:25:24.694 00:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:25:24.694 00:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87350 00:25:24.694 00:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:24.694 00:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:25:24.694 00:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:24.694 00:09:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:24.953 00:09:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:25:24.953 00:09:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87457 00:25:24.953 00:09:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86909 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:24.953 00:09:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:31.523 00:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:31.523 00:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:31.523 00:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:31.523 00:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:31.523 Attaching 4 probes... 
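Two details of the stretch above are easy to miss. First, set_ANA_state just drives both listeners to the requested states; judging by the traced @58-@59 lines it amounts to the following (the function wrapper is inferred, the two RPC invocations appear verbatim in the log):

    set_ANA_state() {
        # $1 applies to the listener on port 4420, $2 to the one on port 4421
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

Second, confirm_io_on_port '' '' is the "no usable path" assertion: with both listeners inaccessible, the jq select() matches nothing (active_port stays empty), trace.txt contains no @path samples (port stays empty), and the two [[ '' == '' ]] tests pass precisely because I/O stopped flowing.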
00:25:31.523 @path[10.0.0.3, 4421]: 15923 00:25:31.523 @path[10.0.0.3, 4421]: 15986 00:25:31.523 @path[10.0.0.3, 4421]: 15973 00:25:31.523 @path[10.0.0.3, 4421]: 15970 00:25:31.523 @path[10.0.0.3, 4421]: 16040 00:25:31.523 00:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:31.523 00:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:31.523 00:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:31.523 00:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:31.523 00:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:31.523 00:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:31.523 00:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87457 00:25:31.523 00:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:31.523 00:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:31.523 00:09:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:25:32.461 00:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:25:32.461 00:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87581 00:25:32.461 00:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86909 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:32.461 00:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:39.032 00:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:39.032 00:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:39.032 00:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:39.032 00:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:39.032 Attaching 4 probes... 
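Since the jq filter at @67 is what turns the RPC reply into a port number in every round, a worked example helps. The JSON below is an illustrative, heavily trimmed version of a nvmf_subsystem_get_listeners reply (only the fields the filter touches are kept):

    cat <<'EOF' | jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
    [
      {"address": {"trtype": "TCP", "traddr": "10.0.0.3", "trsvcid": "4420"},
       "ana_states": [{"ana_group": 1, "ana_state": "non_optimized"}]},
      {"address": {"trtype": "TCP", "traddr": "10.0.0.3", "trsvcid": "4421"},
       "ana_states": [{"ana_group": 1, "ana_state": "inaccessible"}]}
    ]
    EOF
    # prints: 4420

select() keeps only listeners whose first ANA-group state matches, and .address.trsvcid then yields that listener's port; -r strips the JSON quotes so the shell sees a bare 4420.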
00:25:39.032 @path[10.0.0.3, 4420]: 15467 00:25:39.032 @path[10.0.0.3, 4420]: 15782 00:25:39.032 @path[10.0.0.3, 4420]: 15771 00:25:39.032 @path[10.0.0.3, 4420]: 15710 00:25:39.032 @path[10.0.0.3, 4420]: 15707 00:25:39.032 00:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:39.032 00:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:39.032 00:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:39.032 00:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:39.032 00:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:39.032 00:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:39.032 00:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87581 00:25:39.032 00:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:39.032 00:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:39.032 [2024-11-19 00:09:45.593989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:39.032 00:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:39.291 00:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:25:45.862 00:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:25:45.862 00:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87754 00:25:45.862 00:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86909 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:45.862 00:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:52.446 00:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:52.446 00:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:52.446 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:52.446 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:52.446 Attaching 4 probes... 
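The failback at @107-@108 above restores the second path in two explicit RPCs rather than one: re-adding the listener brings the port back (the target logs the "NVMe/TCP Target Listening" notice), and the ANA state is then set explicitly so the check that follows can expect I/O on 4421 regardless of what a fresh listener would default to. Equivalent standalone commands, copied from this run:

    # re-create the listener that @100 removed earlier
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    # explicitly promote the restored path before re-checking where I/O lands
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized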
00:25:52.446 @path[10.0.0.3, 4421]: 15506 00:25:52.446 @path[10.0.0.3, 4421]: 15810 00:25:52.446 @path[10.0.0.3, 4421]: 15992 00:25:52.446 @path[10.0.0.3, 4421]: 15804 00:25:52.446 @path[10.0.0.3, 4421]: 15760 00:25:52.446 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:52.446 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:52.446 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:52.446 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:52.446 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:52.446 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:52.446 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87754 00:25:52.446 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:52.446 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 86960 00:25:52.446 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 86960 ']' 00:25:52.447 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 86960 00:25:52.447 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:25:52.447 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.447 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86960 00:25:52.447 killing process with pid 86960 00:25:52.447 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:52.447 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:52.447 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86960' 00:25:52.447 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 86960 00:25:52.447 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 86960 00:25:52.447 { 00:25:52.447 "results": [ 00:25:52.447 { 00:25:52.447 "job": "Nvme0n1", 00:25:52.447 "core_mask": "0x4", 00:25:52.447 "workload": "verify", 00:25:52.447 "status": "terminated", 00:25:52.447 "verify_range": { 00:25:52.447 "start": 0, 00:25:52.447 "length": 16384 00:25:52.447 }, 00:25:52.447 "queue_depth": 128, 00:25:52.447 "io_size": 4096, 00:25:52.447 "runtime": 55.506475, 00:25:52.447 "iops": 6570.242480719592, 00:25:52.447 "mibps": 25.665009690310907, 00:25:52.447 "io_failed": 0, 00:25:52.447 "io_timeout": 0, 00:25:52.447 "avg_latency_us": 19457.459258345985, 00:25:52.447 "min_latency_us": 409.6, 00:25:52.447 "max_latency_us": 7046430.72 00:25:52.447 } 00:25:52.447 ], 00:25:52.447 "core_count": 1 00:25:52.447 } 00:25:52.447 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 86960 00:25:52.447 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:52.447 [2024-11-19 00:09:00.772832] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 
initialization... 00:25:52.447 [2024-11-19 00:09:00.772991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86960 ] 00:25:52.447 [2024-11-19 00:09:00.942117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.447 [2024-11-19 00:09:01.036403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:52.447 [2024-11-19 00:09:01.190975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:52.447 Running I/O for 90 seconds... 00:25:52.447 6292.00 IOPS, 24.58 MiB/s [2024-11-19T00:09:59.139Z] 6422.00 IOPS, 25.09 MiB/s [2024-11-19T00:09:59.139Z] 6457.67 IOPS, 25.23 MiB/s [2024-11-19T00:09:59.139Z] 6507.25 IOPS, 25.42 MiB/s [2024-11-19T00:09:59.139Z] 6536.80 IOPS, 25.53 MiB/s [2024-11-19T00:09:59.139Z] 6535.50 IOPS, 25.53 MiB/s [2024-11-19T00:09:59.139Z] 6552.71 IOPS, 25.60 MiB/s [2024-11-19T00:09:59.139Z] 6533.62 IOPS, 25.52 MiB/s [2024-11-19T00:09:59.139Z] [2024-11-19 00:09:10.936573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:36320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.447 [2024-11-19 00:09:10.936699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:52.447 [2024-11-19 00:09:10.936785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.447 [2024-11-19 00:09:10.936814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:52.447 [2024-11-19 00:09:10.936844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.447 [2024-11-19 00:09:10.936864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:52.447 [2024-11-19 00:09:10.936908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:36344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.447 [2024-11-19 00:09:10.936928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:52.447 [2024-11-19 00:09:10.936955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.447 [2024-11-19 00:09:10.936974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:52.447 [2024-11-19 00:09:10.937001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.447 [2024-11-19 00:09:10.937021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:52.447 [2024-11-19 00:09:10.937047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:36368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.447 [2024-11-19 00:09:10.937067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0
[... ~115 near-identical nvme_qpair.c entries elided: alternating WRITE/READ command notices for lba 36160-37176, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 001d through 0011, returned while the host retried I/O against a path whose ANA state had been made inaccessible ...]
[2024-11-19 00:09:10.945309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:36288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.450 [2024-11-19 00:09:10.945328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:52.450 [2024-11-19 00:09:10.945354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.450 [2024-11-19 00:09:10.945373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:52.450 [2024-11-19 00:09:10.945399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.450 [2024-11-19 00:09:10.945419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:52.450 [2024-11-19 00:09:10.945445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.450 [2024-11-19 00:09:10.945464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:52.450 6635.89 IOPS, 25.92 MiB/s [2024-11-19T00:09:59.142Z] 6783.50 IOPS, 26.50 MiB/s [2024-11-19T00:09:59.142Z] 6894.82 IOPS, 26.93 MiB/s [2024-11-19T00:09:59.142Z] 6993.58 IOPS, 27.32 MiB/s [2024-11-19T00:09:59.142Z] 7079.00 IOPS, 27.65 MiB/s [2024-11-19T00:09:59.142Z] 7157.36 IOPS, 27.96 MiB/s [2024-11-19T00:09:59.142Z] [2024-11-19 00:09:17.456194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.450 [2024-11-19 00:09:17.456274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:52.450 [2024-11-19 00:09:17.456360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.450 [2024-11-19 00:09:17.456389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:52.450 [2024-11-19 00:09:17.456418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.450 [2024-11-19 00:09:17.456438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:52.450 [2024-11-19 00:09:17.456465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.450 [2024-11-19 00:09:17.456506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:52.450 [2024-11-19 00:09:17.456536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.450 [2024-11-19 00:09:17.456555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:52.450 [2024-11-19 00:09:17.456582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.450 [2024-11-19 00:09:17.456601] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.450 [2024-11-19 00:09:17.456660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.450 [2024-11-19 00:09:17.456698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.450 [2024-11-19 00:09:17.456725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.450 [2024-11-19 00:09:17.456744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:52.450 [2024-11-19 00:09:17.456775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.450 [2024-11-19 00:09:17.456794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:52.450 [2024-11-19 00:09:17.456820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.450 [2024-11-19 00:09:17.456838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:52.450 [2024-11-19 00:09:17.456862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.450 [2024-11-19 00:09:17.456880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:52.450 [2024-11-19 00:09:17.456905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.450 [2024-11-19 00:09:17.456923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:52.450 [2024-11-19 00:09:17.456948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.451 [2024-11-19 00:09:17.456966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.456991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.451 [2024-11-19 00:09:17.457009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.451 [2024-11-19 00:09:17.457052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.451 [2024-11-19 
00:09:17.457105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.451 [2024-11-19 00:09:17.457152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.451 [2024-11-19 00:09:17.457199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.451 [2024-11-19 00:09:17.457243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.451 [2024-11-19 00:09:17.457287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.451 [2024-11-19 00:09:17.457331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.451 [2024-11-19 00:09:17.457374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.451 [2024-11-19 00:09:17.457418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.451 [2024-11-19 00:09:17.457462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.451 [2024-11-19 00:09:17.457531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61600 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.451 [2024-11-19 00:09:17.457577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.451 [2024-11-19 00:09:17.457651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.451 [2024-11-19 00:09:17.457699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.451 [2024-11-19 00:09:17.457755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.451 [2024-11-19 00:09:17.457800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.451 [2024-11-19 00:09:17.457845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.451 [2024-11-19 00:09:17.457889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.451 [2024-11-19 00:09:17.457934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.457961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.451 [2024-11-19 00:09:17.457980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.458020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.451 [2024-11-19 00:09:17.458039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.458064] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.451 [2024-11-19 00:09:17.458083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.458108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.451 [2024-11-19 00:09:17.458126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.458151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.451 [2024-11-19 00:09:17.458170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.458195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.451 [2024-11-19 00:09:17.458214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.458238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.451 [2024-11-19 00:09:17.458257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.458290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.451 [2024-11-19 00:09:17.458309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.458334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.451 [2024-11-19 00:09:17.458353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.458378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.451 [2024-11-19 00:09:17.458396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:52.451 [2024-11-19 00:09:17.458457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-11-19 00:09:17.458477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.458503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-11-19 00:09:17.458522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 
00:09:17.458548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-11-19 00:09:17.458567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.458593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-11-19 00:09:17.458612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.458638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-11-19 00:09:17.458671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.458699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-11-19 00:09:17.458719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.458746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-11-19 00:09:17.458765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.458791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-11-19 00:09:17.458810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.458836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-11-19 00:09:17.458855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.458881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-11-19 00:09:17.458909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.458936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-11-19 00:09:17.458955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.458982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.459001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 
cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.459047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.459117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.459164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.459210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.459255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.459300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.459345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.459389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.459434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.459497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.459546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.459592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.459669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.459715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-11-19 00:09:17.459762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-11-19 00:09:17.459808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-11-19 00:09:17.459854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-11-19 00:09:17.459900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-11-19 00:09:17.459946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.459973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-11-19 00:09:17.459993] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.460035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-11-19 00:09:17.460055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.460081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.452 [2024-11-19 00:09:17.460100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.460133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.460153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.460179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.460198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.460224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.460243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.460269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.460289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.460367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.452 [2024-11-19 00:09:17.460388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:52.452 [2024-11-19 00:09:17.460417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.460444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.460472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.460492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.460519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:52.453 [2024-11-19 00:09:17.460539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.460567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.460586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.460614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.460678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.460708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.460728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.460759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.460779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.460814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.460834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.460860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.460879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.460904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.460923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.460949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.460968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.460994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.461013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.461038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 
nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.461057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.461083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.461102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.461128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.453 [2024-11-19 00:09:17.461147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.461175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.453 [2024-11-19 00:09:17.461194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.461220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.453 [2024-11-19 00:09:17.461239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.461265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.453 [2024-11-19 00:09:17.461284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.461310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.453 [2024-11-19 00:09:17.461329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.461361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.453 [2024-11-19 00:09:17.461381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.461407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.453 [2024-11-19 00:09:17.461427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.461453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.453 [2024-11-19 00:09:17.461472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.461502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.453 [2024-11-19 00:09:17.461522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.461547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.453 [2024-11-19 00:09:17.461566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.461592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.453 [2024-11-19 00:09:17.461621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.461667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.453 [2024-11-19 00:09:17.461687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.461713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.453 [2024-11-19 00:09:17.461732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.461758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.453 [2024-11-19 00:09:17.461777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.461803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.453 [2024-11-19 00:09:17.461822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.463198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.453 [2024-11-19 00:09:17.463234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.463270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.463291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.463322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.463353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:25:52.453 [2024-11-19 00:09:17.463382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.463401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.463427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.463446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.463472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.463492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.463517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.463535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.463561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.463581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.463789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.453 [2024-11-19 00:09:17.463820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:52.453 [2024-11-19 00:09:17.463855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.453 [2024-11-19 00:09:17.463877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.463905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.454 [2024-11-19 00:09:17.463925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.463951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.454 [2024-11-19 00:09:17.463971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.463997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.454 [2024-11-19 00:09:17.464031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.464057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.454 [2024-11-19 00:09:17.464075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.464101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.454 [2024-11-19 00:09:17.464130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.464158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.454 [2024-11-19 00:09:17.464177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.464202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.454 [2024-11-19 00:09:17.464221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.464247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.454 [2024-11-19 00:09:17.464266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.464302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.454 [2024-11-19 00:09:17.464342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.464370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.454 [2024-11-19 00:09:17.464390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.464418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.454 [2024-11-19 00:09:17.464439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.464467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.454 [2024-11-19 00:09:17.464487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.464515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.454 [2024-11-19 00:09:17.464535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.464564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.454 [2024-11-19 00:09:17.464585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.464687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.454 [2024-11-19 00:09:17.464712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.464740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.454 [2024-11-19 00:09:17.464759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.464786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.454 [2024-11-19 00:09:17.464804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.464840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.454 [2024-11-19 00:09:17.464860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.464886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.454 [2024-11-19 00:09:17.464905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.464931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.454 [2024-11-19 00:09:17.464950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.464976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.454 [2024-11-19 00:09:17.464995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.465021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.454 [2024-11-19 00:09:17.465040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:52.454 [2024-11-19 00:09:17.465066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:52.454 [2024-11-19 00:09:17.465085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:25:52.454 [2024-11-19 00:09:17.465111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.454 [2024-11-19 00:09:17.465130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:25:52.454 [2024-11-19 00:09:17.465290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:52.454 [2024-11-19 00:09:17.465309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
[... several hundred similar NOTICE lines elided: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs on qid:1 for READ (lba 61016-61456) and WRITE (lba 61464-62032) I/Os, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 2024-11-19 00:09:17.465-00:09:17.478 ...]
00:25:52.460 [2024-11-19 00:09:17.478696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.460 [2024-11-19 00:09:17.478716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.478743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.460 [2024-11-19 00:09:17.478763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.478791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.460 [2024-11-19 00:09:17.478811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.478838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.460 [2024-11-19 00:09:17.478858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.478885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.460 [2024-11-19 00:09:17.478904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.478947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.460 [2024-11-19 00:09:17.478966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.478992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.460 [2024-11-19 00:09:17.479011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.460 [2024-11-19 00:09:17.479057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.460 [2024-11-19 00:09:17.479110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.460 [2024-11-19 00:09:17.479158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.460 [2024-11-19 00:09:17.479203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.460 [2024-11-19 00:09:17.479248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.460 [2024-11-19 00:09:17.479293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.460 [2024-11-19 00:09:17.479357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.460 [2024-11-19 00:09:17.479403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.460 [2024-11-19 00:09:17.479448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.460 [2024-11-19 00:09:17.479493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.460 [2024-11-19 00:09:17.479539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.460 [2024-11-19 00:09:17.479585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.460 [2024-11-19 00:09:17.479662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.460 [2024-11-19 00:09:17.479718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.460 [2024-11-19 00:09:17.479767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.460 [2024-11-19 00:09:17.479814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.460 [2024-11-19 00:09:17.479860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.460 [2024-11-19 00:09:17.479907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.460 [2024-11-19 00:09:17.479953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.479980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.460 [2024-11-19 00:09:17.480014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.480040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.460 [2024-11-19 00:09:17.480060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:52.460 [2024-11-19 00:09:17.480085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.460 [2024-11-19 00:09:17.480105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.480131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:52.461 [2024-11-19 00:09:17.480150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.480176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.461 [2024-11-19 00:09:17.480195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.480221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.461 [2024-11-19 00:09:17.480241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.480267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.461 [2024-11-19 00:09:17.480301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.480349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.461 [2024-11-19 00:09:17.480372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.480399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.480420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.480448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.480469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.480497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.480517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.480544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.480565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.480593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.480656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.480722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 
lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.480744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.480771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.480792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.480819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.480839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.480866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.480886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.480913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.480934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.480961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.480981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.481032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.481068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.481094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.461 [2024-11-19 00:09:17.481113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.481139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.461 [2024-11-19 00:09:17.481158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.481184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.461 [2024-11-19 00:09:17.481203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.481229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.461 [2024-11-19 00:09:17.481248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.481274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.461 [2024-11-19 00:09:17.481293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.481319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.461 [2024-11-19 00:09:17.481339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.481365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.461 [2024-11-19 00:09:17.481384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.481410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.461 [2024-11-19 00:09:17.481430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.481456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.481475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.483898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.483935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.484003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.484033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.484074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.484096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.484122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.484142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:25:52.461 [2024-11-19 00:09:17.484168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.484187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.484213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.484233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.484260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.484279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.484349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.484372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.484399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.484420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.484448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.484468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.484495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.484516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.484544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.461 [2024-11-19 00:09:17.484565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:52.461 [2024-11-19 00:09:17.484593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.462 [2024-11-19 00:09:17.484614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.484684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.462 [2024-11-19 00:09:17.484705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.484732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.462 [2024-11-19 00:09:17.484760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.484787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.462 [2024-11-19 00:09:17.484807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.484833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.462 [2024-11-19 00:09:17.484853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.484878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.462 [2024-11-19 00:09:17.484897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.484923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.462 [2024-11-19 00:09:17.484943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.484969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.462 [2024-11-19 00:09:17.484988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.462 [2024-11-19 00:09:17.485033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.462 [2024-11-19 00:09:17.485079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.462 [2024-11-19 00:09:17.485141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.462 [2024-11-19 00:09:17.485187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.462 [2024-11-19 00:09:17.485233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.462 [2024-11-19 00:09:17.485278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.462 [2024-11-19 00:09:17.485331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.462 [2024-11-19 00:09:17.485379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.462 [2024-11-19 00:09:17.485425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.462 [2024-11-19 00:09:17.485470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.462 [2024-11-19 00:09:17.485515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.462 [2024-11-19 00:09:17.485560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.462 [2024-11-19 00:09:17.485620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:52.462 [2024-11-19 00:09:17.485683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.462 [2024-11-19 00:09:17.485730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.462 [2024-11-19 00:09:17.485777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.462 [2024-11-19 00:09:17.485824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.462 [2024-11-19 00:09:17.485870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.462 [2024-11-19 00:09:17.485919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.485961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.462 [2024-11-19 00:09:17.485983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.486026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.462 [2024-11-19 00:09:17.486045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.486071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.462 [2024-11-19 00:09:17.486090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.486115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.462 [2024-11-19 00:09:17.486134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.486160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:97 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.462 [2024-11-19 00:09:17.486179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.486205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.462 [2024-11-19 00:09:17.486225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.486251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.462 [2024-11-19 00:09:17.486270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.486296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.462 [2024-11-19 00:09:17.486315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.486341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.462 [2024-11-19 00:09:17.486360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.486385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.462 [2024-11-19 00:09:17.486404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.486431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.462 [2024-11-19 00:09:17.486450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.486476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.462 [2024-11-19 00:09:17.486495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:52.462 [2024-11-19 00:09:17.486528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.486549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.486575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.486595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.486633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.486656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.486682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.486702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.486728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.486747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.486773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.463 [2024-11-19 00:09:17.486792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.486819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.463 [2024-11-19 00:09:17.486838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.486864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.463 [2024-11-19 00:09:17.486883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.486909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.463 [2024-11-19 00:09:17.486928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.486955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.463 [2024-11-19 00:09:17.486974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.487001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.463 [2024-11-19 00:09:17.487020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.487046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.463 [2024-11-19 00:09:17.487065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:25:52.463 [2024-11-19 00:09:17.487108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.463 [2024-11-19 00:09:17.487135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.487163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.463 [2024-11-19 00:09:17.487183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.487209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.463 [2024-11-19 00:09:17.487229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.487256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.463 [2024-11-19 00:09:17.487276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.487302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.463 [2024-11-19 00:09:17.487322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.487349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.463 [2024-11-19 00:09:17.487369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.487396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.463 [2024-11-19 00:09:17.487431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.487458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.463 [2024-11-19 00:09:17.487477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.487522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.463 [2024-11-19 00:09:17.487546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.487573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.463 [2024-11-19 00:09:17.487592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.487619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.463 [2024-11-19 00:09:17.487652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.487681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.463 [2024-11-19 00:09:17.487701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.487727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.487755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.487784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.487804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.487830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.487849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.487875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.487894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.487920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.487939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.487965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.487984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.488010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.488029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.488055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.488074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.488100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.488119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.488145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.488165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.488191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.488210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.488251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.488272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.488323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.488362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.488400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.488422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:52.463 [2024-11-19 00:09:17.488449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.463 [2024-11-19 00:09:17.488470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.488498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.464 [2024-11-19 00:09:17.488518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.488555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.464 [2024-11-19 00:09:17.488575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.488604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:52.464 [2024-11-19 00:09:17.488639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.488695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.464 [2024-11-19 00:09:17.488715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.488741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.464 [2024-11-19 00:09:17.488760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.488787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.464 [2024-11-19 00:09:17.488806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.488832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.464 [2024-11-19 00:09:17.488850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.488877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.464 [2024-11-19 00:09:17.488896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.488923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.464 [2024-11-19 00:09:17.488942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.488968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.464 [2024-11-19 00:09:17.488987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.464 [2024-11-19 00:09:17.489042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.464 [2024-11-19 00:09:17.489087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61424 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.464 [2024-11-19 00:09:17.489133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:61432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.464 [2024-11-19 00:09:17.489178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.464 [2024-11-19 00:09:17.489223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.464 [2024-11-19 00:09:17.489268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.464 [2024-11-19 00:09:17.489314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.464 [2024-11-19 00:09:17.489359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.464 [2024-11-19 00:09:17.489405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.464 [2024-11-19 00:09:17.489450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.464 [2024-11-19 00:09:17.489496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.464 [2024-11-19 00:09:17.489547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.464 [2024-11-19 00:09:17.489600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.464 [2024-11-19 00:09:17.489677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.464 [2024-11-19 00:09:17.489723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.464 [2024-11-19 00:09:17.489770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.464 [2024-11-19 00:09:17.489816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.464 [2024-11-19 00:09:17.489863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.464 [2024-11-19 00:09:17.489910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.464 [2024-11-19 00:09:17.489956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.489982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.464 [2024-11-19 00:09:17.490002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.490043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.464 [2024-11-19 00:09:17.490062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:25:52.464 [2024-11-19 00:09:17.490087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.464 [2024-11-19 00:09:17.490106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.490132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.464 [2024-11-19 00:09:17.490152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:52.464 [2024-11-19 00:09:17.490178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:17.490204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:17.490232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:17.490252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:17.490279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:17.490299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:17.490702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.465 [2024-11-19 00:09:17.490735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:52.465 7062.07 IOPS, 27.59 MiB/s [2024-11-19T00:09:59.157Z] 6742.25 IOPS, 26.34 MiB/s [2024-11-19T00:09:59.157Z] 6820.94 IOPS, 26.64 MiB/s [2024-11-19T00:09:59.157Z] 6889.11 IOPS, 26.91 MiB/s [2024-11-19T00:09:59.157Z] 6954.11 IOPS, 27.16 MiB/s [2024-11-19T00:09:59.157Z] 7013.00 IOPS, 27.39 MiB/s [2024-11-19T00:09:59.157Z] 7064.19 IOPS, 27.59 MiB/s [2024-11-19T00:09:59.157Z] [2024-11-19 00:09:24.632270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.465 [2024-11-19 00:09:24.632401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.632478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.465 [2024-11-19 00:09:24.632506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.632536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.465 [2024-11-19 00:09:24.632557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.632586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.465 [2024-11-19 00:09:24.632605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.632650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.465 [2024-11-19 00:09:24.632670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.632711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.465 [2024-11-19 00:09:24.632730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.632757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.465 [2024-11-19 00:09:24.632776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.632802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.465 [2024-11-19 00:09:24.632820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.632867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.632887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.632913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.632932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.632958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.632991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.633971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.465 [2024-11-19 00:09:24.633990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.634186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:107 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.465 [2024-11-19 00:09:24.634224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:52.465 [2024-11-19 00:09:24.634256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.465 [2024-11-19 00:09:24.634276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.634302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.634321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.634348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.634367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.634393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.634411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.634437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.634456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.634482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.634500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.634526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.634544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.634570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.634589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.634614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.634649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.634678] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.634698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.634724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.634742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.634767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.634787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.634821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.634841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.634868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.634886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.634912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.634930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.634956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.466 [2024-11-19 00:09:24.634975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.466 [2024-11-19 00:09:24.635021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.466 [2024-11-19 00:09:24.635066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.466 [2024-11-19 00:09:24.635111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007b 
p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.466 [2024-11-19 00:09:24.635156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.466 [2024-11-19 00:09:24.635200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.466 [2024-11-19 00:09:24.635246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.466 [2024-11-19 00:09:24.635291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.635336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.635388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.635433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.635477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.635521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.635565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.635624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.466 [2024-11-19 00:09:24.635671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.466 [2024-11-19 00:09:24.635716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.466 [2024-11-19 00:09:24.635761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.466 [2024-11-19 00:09:24.635805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.466 [2024-11-19 00:09:24.635849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.466 [2024-11-19 00:09:24.635892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.466 [2024-11-19 00:09:24.635946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.635972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.466 [2024-11-19 00:09:24.635997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.636024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.466 [2024-11-19 00:09:24.636043] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:52.466 [2024-11-19 00:09:24.636074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.636095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.636120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.636139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.636164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.636183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.636208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.636226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.636251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.636270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.636320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.636358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.636409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.636428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.636456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.636476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.636503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.636523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.636551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 
[2024-11-19 00:09:24.636582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.636612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.636647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.636688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.636724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.636750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.636769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.636795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.636814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.636879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.636900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.636926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.636945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.636971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.636990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.637035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.637079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64632 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.637124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.637169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.637222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.637270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.467 [2024-11-19 00:09:24.637315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.467 [2024-11-19 00:09:24.637360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.467 [2024-11-19 00:09:24.637424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.467 [2024-11-19 00:09:24.637470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.467 [2024-11-19 00:09:24.637520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.467 [2024-11-19 00:09:24.637566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637593] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.467 [2024-11-19 00:09:24.637628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.467 [2024-11-19 00:09:24.637695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.467 [2024-11-19 00:09:24.637742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.467 [2024-11-19 00:09:24.637789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.467 [2024-11-19 00:09:24.637851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.467 [2024-11-19 00:09:24.637906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.467 [2024-11-19 00:09:24.637952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.637978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.467 [2024-11-19 00:09:24.638012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.638039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.467 [2024-11-19 00:09:24.638058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:52.467 [2024-11-19 00:09:24.638811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.467 [2024-11-19 00:09:24.638845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 
00:09:24.638896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.468 [2024-11-19 00:09:24.638923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:24.638957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:24.638978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:24.639013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:24.639033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:24.639066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:24.639085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:24.639118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:24.639138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:24.639171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:24.639191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:24.639223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:24.639243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:24.639289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:24.639311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:24.639363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:24.639388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:24.639422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:24.639442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 
sqhd:0040 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:24.639476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:24.639495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:24.639527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:24.639547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:24.639580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:24.639615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:52.468 7064.91 IOPS, 27.60 MiB/s [2024-11-19T00:09:59.160Z] 6757.74 IOPS, 26.40 MiB/s [2024-11-19T00:09:59.160Z] 6476.17 IOPS, 25.30 MiB/s [2024-11-19T00:09:59.160Z] 6217.12 IOPS, 24.29 MiB/s [2024-11-19T00:09:59.160Z] 5978.00 IOPS, 23.35 MiB/s [2024-11-19T00:09:59.160Z] 5756.59 IOPS, 22.49 MiB/s [2024-11-19T00:09:59.160Z] 5551.00 IOPS, 21.68 MiB/s [2024-11-19T00:09:59.160Z] 5380.10 IOPS, 21.02 MiB/s [2024-11-19T00:09:59.160Z] 5466.90 IOPS, 21.36 MiB/s [2024-11-19T00:09:59.160Z] 5548.87 IOPS, 21.68 MiB/s [2024-11-19T00:09:59.160Z] 5624.72 IOPS, 21.97 MiB/s [2024-11-19T00:09:59.160Z] 5697.18 IOPS, 22.25 MiB/s [2024-11-19T00:09:59.160Z] 5767.74 IOPS, 22.53 MiB/s [2024-11-19T00:09:59.160Z] 5829.69 IOPS, 22.77 MiB/s [2024-11-19T00:09:59.160Z] [2024-11-19 00:09:38.013157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:38.013238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.013312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:38.013339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.013369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:38.013388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.013415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:38.013434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.013460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:38.013479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.013522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:38.013543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.013569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:38.013587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.013659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:38.013681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.013710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:38.013731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.013758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:38.013778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.013805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:38.013824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.013852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:38.013872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.013899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:38.013920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.013963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:38.013982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.014024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:52.468 [2024-11-19 00:09:38.014042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.014068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:38.014086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.014112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:38.014131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.014156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:38.014184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.014212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:38.014231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.014257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.468 [2024-11-19 00:09:38.014276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.014303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.468 [2024-11-19 00:09:38.014322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.014348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.468 [2024-11-19 00:09:38.014367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.014394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.468 [2024-11-19 00:09:38.014413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:52.468 [2024-11-19 00:09:38.014438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.014457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.014483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 
lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.014502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.014528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.014547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.014591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.014644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.014687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.014710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.014737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.014758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.014786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.014815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.014845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.014865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.014893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.014913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.014956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.014976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.015023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.015083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.015128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.469 [2024-11-19 00:09:38.015173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.469 [2024-11-19 00:09:38.015218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.469 [2024-11-19 00:09:38.015262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.469 [2024-11-19 00:09:38.015308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.469 [2024-11-19 00:09:38.015395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.469 [2024-11-19 00:09:38.015434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.469 [2024-11-19 00:09:38.015480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.469 [2024-11-19 00:09:38.015516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015534] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.469 [2024-11-19 00:09:38.015551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.469 [2024-11-19 00:09:38.015603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.469 [2024-11-19 00:09:38.015715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.469 [2024-11-19 00:09:38.015754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.015792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.015830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.015868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.015906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.015959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.015979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.016027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.016045] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.016068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.016088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.016105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.016123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.016139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.469 [2024-11-19 00:09:38.016158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.469 [2024-11-19 00:09:38.016175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.016193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.016209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.016227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.016244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.016262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.016279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.016297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.016360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.016397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.016416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.016437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.016456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.016484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 
lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.470 [2024-11-19 00:09:38.016503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.016525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.470 [2024-11-19 00:09:38.016546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.016567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.470 [2024-11-19 00:09:38.016586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.016615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.470 [2024-11-19 00:09:38.016636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.016736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.470 [2024-11-19 00:09:38.016758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.016778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.470 [2024-11-19 00:09:38.016796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.016815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.470 [2024-11-19 00:09:38.016833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.016852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.470 [2024-11-19 00:09:38.016870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.016889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.470 [2024-11-19 00:09:38.016906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.016925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.470 [2024-11-19 00:09:38.016959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.016992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:52.470 [2024-11-19 00:09:38.017009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.470 [2024-11-19 00:09:38.017044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.470 [2024-11-19 00:09:38.017079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.470 [2024-11-19 00:09:38.017113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.470 [2024-11-19 00:09:38.017147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.470 [2024-11-19 00:09:38.017190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.017229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.017266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.017301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.017337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.017372] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.017407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.017443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.017478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.017512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.017547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.017598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.017651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.017711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.017747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.017784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.470 [2024-11-19 00:09:38.017820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.470 [2024-11-19 00:09:38.017860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.470 [2024-11-19 00:09:38.017879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.470 [2024-11-19 00:09:38.017897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.017916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.471 [2024-11-19 00:09:38.017934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.017952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.471 [2024-11-19 00:09:38.017969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.471 [2024-11-19 00:09:38.018019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.471 [2024-11-19 00:09:38.018054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.471 [2024-11-19 00:09:38.018088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.471 [2024-11-19 00:09:38.018123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.471 [2024-11-19 00:09:38.018158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.471 [2024-11-19 00:09:38.018200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.471 [2024-11-19 00:09:38.018254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.471 [2024-11-19 00:09:38.018290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.471 [2024-11-19 00:09:38.018325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.471 [2024-11-19 00:09:38.018361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.471 [2024-11-19 00:09:38.018396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.471 [2024-11-19 00:09:38.018432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.471 [2024-11-19 00:09:38.018469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.471 [2024-11-19 00:09:38.018506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.471 [2024-11-19 00:09:38.018542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.471 [2024-11-19 00:09:38.018578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.471 [2024-11-19 00:09:38.018630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.471 [2024-11-19 00:09:38.018689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.471 [2024-11-19 00:09:38.018727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002bf00 is same with the state(6) to be set 00:25:52.471 [2024-11-19 00:09:38.018768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.471 [2024-11-19 00:09:38.018783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.471 [2024-11-19 00:09:38.018799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77232 len:8 PRP1 0x0 PRP2 0x0 00:25:52.471 [2024-11-19 00:09:38.018823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.471 [2024-11-19 00:09:38.018856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.471 [2024-11-19 00:09:38.018870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77688 len:8 PRP1 0x0 PRP2 0x0 00:25:52.471 [2024-11-19 00:09:38.018887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.471 [2024-11-19 00:09:38.018916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.471 [2024-11-19 00:09:38.018930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77696 len:8 PRP1 0x0 PRP2 0x0 00:25:52.471 [2024-11-19 00:09:38.018946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.018963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.471 [2024-11-19 00:09:38.018976] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.471 [2024-11-19 00:09:38.018989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77704 len:8 PRP1 0x0 PRP2 0x0 00:25:52.471 [2024-11-19 00:09:38.019020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.019036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.471 [2024-11-19 00:09:38.019049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.471 [2024-11-19 00:09:38.019062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77712 len:8 PRP1 0x0 PRP2 0x0 00:25:52.471 [2024-11-19 00:09:38.019078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.019094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.471 [2024-11-19 00:09:38.019107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.471 [2024-11-19 00:09:38.019120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77720 len:8 PRP1 0x0 PRP2 0x0 00:25:52.471 [2024-11-19 00:09:38.019136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.019152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.471 [2024-11-19 00:09:38.019165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.471 [2024-11-19 00:09:38.019184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77728 len:8 PRP1 0x0 PRP2 0x0 00:25:52.471 [2024-11-19 00:09:38.019201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.019217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.471 [2024-11-19 00:09:38.019229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.471 [2024-11-19 00:09:38.019243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77736 len:8 PRP1 0x0 PRP2 0x0 00:25:52.471 [2024-11-19 00:09:38.019258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.019274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:52.471 [2024-11-19 00:09:38.019286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:52.471 [2024-11-19 00:09:38.019300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77744 len:8 PRP1 0x0 PRP2 0x0 00:25:52.471 [2024-11-19 00:09:38.019315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.471 [2024-11-19 00:09:38.020912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:52.471 [2024-11-19 00:09:38.021025] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.471 [2024-11-19 00:09:38.021054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:52.471 [2024-11-19 00:09:38.021109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b500 (9): Bad file descriptor
00:25:52.471 [2024-11-19 00:09:38.021570] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.472 [2024-11-19 00:09:38.021628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b500 with addr=10.0.0.3, port=4421
00:25:52.472 [2024-11-19 00:09:38.021668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set
00:25:52.472 [2024-11-19 00:09:38.021715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b500 (9): Bad file descriptor
00:25:52.472 [2024-11-19 00:09:38.021771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:52.472 [2024-11-19 00:09:38.021794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:52.472 [2024-11-19 00:09:38.021813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:52.472 [2024-11-19 00:09:38.021831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:52.472 [2024-11-19 00:09:38.021850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:52.472 5882.44 IOPS, 22.98 MiB/s [2024-11-19T00:09:59.164Z]
5928.43 IOPS, 23.16 MiB/s [2024-11-19T00:09:59.164Z]
5977.89 IOPS, 23.35 MiB/s [2024-11-19T00:09:59.164Z]
6028.51 IOPS, 23.55 MiB/s [2024-11-19T00:09:59.164Z]
6075.50 IOPS, 23.73 MiB/s [2024-11-19T00:09:59.164Z]
6118.44 IOPS, 23.90 MiB/s [2024-11-19T00:09:59.164Z]
6159.43 IOPS, 24.06 MiB/s [2024-11-19T00:09:59.164Z]
6194.05 IOPS, 24.20 MiB/s [2024-11-19T00:09:59.164Z]
6231.09 IOPS, 24.34 MiB/s [2024-11-19T00:09:59.164Z]
6265.96 IOPS, 24.48 MiB/s [2024-11-19T00:09:59.164Z]
[2024-11-19 00:09:48.105420] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
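The sequence above shows one path (10.0.0.3:4421) refusing connections (errno = 111), one failed controller reset, and then a successful reset once the path comes back, with throughput recovering in the IOPS samples. As a hedged illustration only (the multipath.sh invocation itself is outside this excerpt), two paths to the same subsystem are typically registered through SPDK's rpc.py so that bdev_nvme can fail over between them:

    # Illustrative sketch, not copied from this run's script: the bdev name and the
    # first path's address/port are assumptions; only the 10.0.0.3:4421 path and the
    # cnode1 NQN actually appear in the log above.
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -x multipath
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x multipath

Registering both controllers under one bdev name with multipath enabled is what lets I/O continue (after the reset/reconnect churn logged here) when one address stops accepting connections.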
00:25:52.472 6299.72 IOPS, 24.61 MiB/s [2024-11-19T00:09:59.164Z]
6334.02 IOPS, 24.74 MiB/s [2024-11-19T00:09:59.164Z]
6367.40 IOPS, 24.87 MiB/s [2024-11-19T00:09:59.164Z]
6401.69 IOPS, 25.01 MiB/s [2024-11-19T00:09:59.164Z]
6426.70 IOPS, 25.10 MiB/s [2024-11-19T00:09:59.164Z]
6454.96 IOPS, 25.21 MiB/s [2024-11-19T00:09:59.164Z]
6482.83 IOPS, 25.32 MiB/s [2024-11-19T00:09:59.164Z]
6510.55 IOPS, 25.43 MiB/s [2024-11-19T00:09:59.164Z]
6537.39 IOPS, 25.54 MiB/s [2024-11-19T00:09:59.164Z]
6562.24 IOPS, 25.63 MiB/s [2024-11-19T00:09:59.164Z]
Received shutdown signal, test time was about 55.507305 seconds
00:25:52.472
00:25:52.472                                                       Latency(us)
00:25:52.472 [2024-11-19T00:09:59.164Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average      min        max
00:25:52.472 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:52.472 Verification LBA range: start 0x0 length 0x4000
00:25:52.472 Nvme0n1            :      55.51  6570.24   25.67    0.00  0.00  19457.46   409.60  7046430.72
00:25:52.472 [2024-11-19T00:09:59.164Z] ===================================================================================================================
00:25:52.472 [2024-11-19T00:09:59.164Z] Total              :             6570.24   25.67    0.00  0.00  19457.46   409.60  7046430.72
00:25:52.472 00:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:52.730 rmmod nvme_tcp
00:25:52.730 rmmod nvme_fabrics
00:25:52.730 rmmod nvme_keyring
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 86909 ']'
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 86909
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 86909 ']'
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 86909
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
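A quick sanity check on the Total row above (a hedged, illustrative one-liner; this awk call is not part of the test itself):

    awk 'BEGIN {
        iops = 6570.24                      # average IOPS from the Total row
        print iops * 4096 / (1024 * 1024)   # 4096-byte I/Os -> ~25.67 MiB/s, matching the MiB/s column
        print iops * 55.51                  # ~364,700 I/Os completed over the 55.51 s runtime
    }'

So the throughput, I/O size, and runtime columns are mutually consistent despite the mid-run path failure.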
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86909
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86909'
00:25:52.730 killing process with pid 86909
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 86909
00:25:52.730 00:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 86909
00:25:53.667 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:53.667 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:53.667 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:53.667 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr
00:25:53.667 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save
00:25:53.667 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:53.667 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:25:53.667 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:53.667 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:25:53.667 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:25:53.667 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:25:53.667 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:25:53.667 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns
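The nvmf_veth_fini steps above tear down a bridged veth topology between the host and the nvmf_tgt_ns_spdk namespace. A rough sketch of the kind of setup being undone, assuming standard iproute2 semantics (interface and namespace names come from the log; the addresses and the actual SPDK setup script are not shown in this excerpt):

    # Hypothetical reconstruction, not the real nvmf/common.sh setup code.
    ip netns add nvmf_tgt_ns_spdk                              # namespace the target side lives in
    ip link add nvmf_init_if type veth peer name nvmf_tgt_if   # host/namespace veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end inside
    ip link add nvmf_br type bridge                            # the bridge deleted at @241 above
    ip link set nvmf_init_if master nvmf_br                    # host end becomes a bridge port
    ip link set nvmf_init_if up
    ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # a target IP seen in the log
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

Running the target in its own namespace is what lets the test cut a path (and trigger the failover traced earlier) without disturbing the host's real network.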
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0
00:25:53.927
00:25:53.927 real	1m3.187s
00:25:53.927 user	2m54.042s
00:25:53.927 sys	0m17.946s
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:25:53.927 ************************************
00:25:53.927 END TEST nvmf_host_multipath
00:25:53.927 ************************************
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:53.927 ************************************
00:25:53.927 START TEST nvmf_timeout
00:25:53.927 ************************************
00:25:53.927 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:25:54.188 * Looking for test storage...
00:25:54.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version
00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-:
00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1
00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-:
00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2
00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<'
00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2
00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1
00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in
00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1
00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 ))
ver1_l : ver2_l) )) 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:54.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.188 --rc genhtml_branch_coverage=1 00:25:54.188 --rc genhtml_function_coverage=1 00:25:54.188 --rc genhtml_legend=1 00:25:54.188 --rc geninfo_all_blocks=1 00:25:54.188 --rc geninfo_unexecuted_blocks=1 00:25:54.188 00:25:54.188 ' 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:54.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.188 --rc genhtml_branch_coverage=1 00:25:54.188 --rc genhtml_function_coverage=1 00:25:54.188 --rc genhtml_legend=1 00:25:54.188 --rc geninfo_all_blocks=1 00:25:54.188 --rc geninfo_unexecuted_blocks=1 00:25:54.188 00:25:54.188 ' 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:54.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.188 --rc genhtml_branch_coverage=1 00:25:54.188 --rc genhtml_function_coverage=1 00:25:54.188 --rc genhtml_legend=1 00:25:54.188 --rc geninfo_all_blocks=1 00:25:54.188 --rc geninfo_unexecuted_blocks=1 00:25:54.188 00:25:54.188 ' 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:54.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.188 --rc genhtml_branch_coverage=1 00:25:54.188 --rc genhtml_function_coverage=1 00:25:54.188 --rc genhtml_legend=1 00:25:54.188 --rc geninfo_all_blocks=1 00:25:54.188 --rc geninfo_unexecuted_blocks=1 00:25:54.188 00:25:54.188 ' 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:54.188 
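The lt 1.15 2 / cmp_versions walk above is scripts/common.sh deciding that the installed lcov (1.15) predates 2.x, so the extra --rc branch/function coverage flags get exported. A minimal standalone sketch of the same dotted-version comparison, assuming bash 4+; the helper name version_lt is illustrative, not the common.sh source:

    # Split both versions on ".", "-" or ":" and compare field by field;
    # missing fields default to 0, mirroring the ver1[v]/ver2[v] loop above.
    version_lt() {
        local IFS='.-:' i n
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # ver1 field wins: not less-than
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # ver1 field smaller: less-than
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo 'old lcov: enable --rc lcov_branch_coverage=1 etc.'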
00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=[... /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin segments, already prepended several times by earlier sourcing in this run, followed by the stock system PATH; elided ...] 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=[... same accumulated PATH with the toolchain segments prepended once more; elided ...] 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=[... same accumulated PATH prepended once more; elided ...] 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo [... the exported PATH, as above; elided ...] 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:54.188 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:54.188 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:54.189 00:10:00
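The "[: : integer expression expected" message above is noise rather than a failure: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' against an unset toggle, and test's -eq needs integers on both sides, so bash prints the complaint and the branch simply falls through. A hedged sketch of the usual guard for that pattern (the variable name is illustrative):

    # '[ "" -eq 1 ]' is a runtime error in bash; default the flag to 0 first.
    some_flag=""                           # e.g. an unexported SPDK_* toggle
    if [ "${some_flag:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi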
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:54.189 Cannot find device "nvmf_init_br" 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:54.189 Cannot find device "nvmf_init_br2" 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:25:54.189 Cannot find device "nvmf_tgt_br" 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:54.189 Cannot find device "nvmf_tgt_br2" 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:25:54.189 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:54.449 Cannot find device "nvmf_init_br" 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:54.449 Cannot find device "nvmf_init_br2" 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:54.449 Cannot find device "nvmf_tgt_br" 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:54.449 Cannot find device "nvmf_tgt_br2" 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:54.449 Cannot find device "nvmf_br" 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:54.449 Cannot find device "nvmf_init_if" 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:54.449 Cannot find device "nvmf_init_if2" 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:54.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:54.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:54.449 00:10:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:54.449 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
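At this point nvmf_veth_init has rebuilt the topology that the pings below verify: initiator veths in the root namespace, target veths inside nvmf_tgt_ns_spdk, everything joined through the nvmf_br bridge, plus ACCEPT rules for the NVMe/TCP port. A condensed sketch of the recipe for one initiator/target pair, using the names and addresses from the trace (the full helper also creates the *_if2 pair and tags each iptables rule with an SPDK_NVMF comment):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # 10.0.0.1 = initiator side, 10.0.0.3 = target side.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the root-namespace ends of both veth pairs together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up; ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br master nvmf_br

    # Let NVMe/TCP (port 4420) in, and allow bridge-local forwarding.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT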
00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:54.709 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:54.709 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:25:54.709 00:25:54.709 --- 10.0.0.3 ping statistics --- 00:25:54.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.709 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:54.709 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:54.709 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:25:54.709 00:25:54.709 --- 10.0.0.4 ping statistics --- 00:25:54.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.709 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:54.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:54.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:25:54.709 00:25:54.709 --- 10.0.0.1 ping statistics --- 00:25:54.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.709 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:54.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:25:54.709 00:25:54.709 --- 10.0.0.2 ping statistics --- 00:25:54.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.709 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=88135 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 88135 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:54.709 00:10:01 
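nvmfappstart then launches the target inside the namespace: NVMF_TARGET_NS_CMD (set to "ip netns exec nvmf_tgt_ns_spdk" earlier) is prepended to NVMF_APP, so the 10.0.0.3 listener lives behind the veth pair while the RPC UNIX socket stays reachable from the root namespace. Reduced to its essentials, assuming the repo paths from this run:

    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x3 &    # shm id 0, full tracepoint mask, cores 0-1
    nvmfpid=$!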
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 88135 ']' 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.709 00:10:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.709 [2024-11-19 00:10:01.335521] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:25:54.709 [2024-11-19 00:10:01.335709] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.968 [2024-11-19 00:10:01.516231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:54.968 [2024-11-19 00:10:01.597970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.968 [2024-11-19 00:10:01.598030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.968 [2024-11-19 00:10:01.598064] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.968 [2024-11-19 00:10:01.598087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.968 [2024-11-19 00:10:01.598100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
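waitforlisten blocks until that pid answers on /var/tmp/spdk.sock, with the max_retries=100 budget shown above. The real helper in autotest_common.sh does more bookkeeping; a minimal sketch of the polling idea, using rpc.py's rpc_get_methods as the liveness probe:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rpc_addr=/var/tmp/spdk.sock
    for (( i = 0; i < 100; i++ )); do       # max_retries=100, as in the trace
        kill -0 "$nvmfpid" || exit 1        # bail out if the target died
        $rpc_py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && break
        sleep 0.1
    done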
00:25:54.968 [2024-11-19 00:10:01.599709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.968 [2024-11-19 00:10:01.599730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.227 [2024-11-19 00:10:01.747981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:55.796 00:10:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:55.796 00:10:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:25:55.796 00:10:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:55.796 00:10:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:55.796 00:10:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.796 00:10:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.796 00:10:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:55.796 00:10:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:56.055 [2024-11-19 00:10:02.586375] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.055 00:10:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:56.315 Malloc0 00:25:56.315 00:10:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:56.584 00:10:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:56.859 00:10:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:57.118 [2024-11-19 00:10:03.555619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:57.118 00:10:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:25:57.118 00:10:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=88190 00:25:57.118 00:10:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 88190 /var/tmp/bdevperf.sock 00:25:57.118 00:10:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 88190 ']' 00:25:57.118 00:10:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:57.118 00:10:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:57.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:57.118 00:10:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
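Steps timeout.sh@25-@29 above provision the target over RPC before the host side starts: a TCP transport, a 64 MiB / 512 B-block malloc bdev, the cnode1 subsystem, its namespace, and the 10.0.0.3:4420 listener. Collected in one place, exactly as the trace runs them:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0   # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420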
00:25:57.118 00:10:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:57.118 00:10:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:57.118 [2024-11-19 00:10:03.652150] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:25:57.118 [2024-11-19 00:10:03.652292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88190 ] 00:25:57.377 [2024-11-19 00:10:03.822833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.377 [2024-11-19 00:10:03.945961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.636 [2024-11-19 00:10:04.131409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:58.205 00:10:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.205 00:10:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:25:58.205 00:10:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:58.464 00:10:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:58.723 NVMe0n1 00:25:58.723 00:10:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=88208 00:25:58.723 00:10:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:58.723 00:10:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:25:58.723 Running I/O for 10 seconds... 
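bdevperf runs with its own RPC socket (-r /var/tmp/bdevperf.sock) so the host can be reconfigured independently of the target. Steps @45-@46 above lift the bdev_nvme I/O retry limit and attach the controller with the reconnect policy this test exercises, repeated here for reference:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # -r -1: unlimited retry attempts per failed I/O
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    # Reconnect every 2 s; declare the controller lost after 5 s without a path.
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2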
00:25:59.660 00:10:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:59.922 6436.00 IOPS, 25.14 MiB/s [2024-11-19T00:10:06.614Z] [2024-11-19 00:10:06.533703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:58528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-11-19 00:10:06.533772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-19 00:10:06.533809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-11-19 00:10:06.533824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... several hundred further nvme_qpair.c print_command/print_completion notice pairs elided: each remaining in-flight WRITE on qid:1 (lba 58664 through 59368 and beyond, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) is reported and completed with ABORTED - SQ DELETION (00/08) ...]
OFFSET 0x0 len:0x1000 00:25:59.924 [2024-11-19 00:10:06.536839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.924 [2024-11-19 00:10:06.536859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.924 [2024-11-19 00:10:06.536873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.924 [2024-11-19 00:10:06.536891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.924 [2024-11-19 00:10:06.536906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.924 [2024-11-19 00:10:06.536924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.924 [2024-11-19 00:10:06.536938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.924 [2024-11-19 00:10:06.536958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.924 [2024-11-19 00:10:06.536973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.924 [2024-11-19 00:10:06.536991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.924 [2024-11-19 00:10:06.537021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.924 [2024-11-19 00:10:06.537039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.924 [2024-11-19 00:10:06.537052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.924 [2024-11-19 00:10:06.537069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.924 [2024-11-19 00:10:06.537083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.924 [2024-11-19 00:10:06.537102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.924 [2024-11-19 00:10:06.537116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.924 [2024-11-19 00:10:06.537150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.924 [2024-11-19 00:10:06.537164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.924 [2024-11-19 00:10:06.537180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.925 [2024-11-19 
00:10:06.537194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.925 [2024-11-19 00:10:06.537224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.925 [2024-11-19 00:10:06.537255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.925 [2024-11-19 00:10:06.537285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.925 [2024-11-19 00:10:06.537316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.925 [2024-11-19 00:10:06.537346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.925 [2024-11-19 00:10:06.537377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.925 [2024-11-19 00:10:06.537413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.925 [2024-11-19 00:10:06.537444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.925 [2024-11-19 00:10:06.537475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.925 [2024-11-19 00:10:06.537519] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.925 [2024-11-19 00:10:06.537551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.925 [2024-11-19 00:10:06.537581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.925 [2024-11-19 00:10:06.537612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.925 [2024-11-19 00:10:06.537643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.925 [2024-11-19 00:10:06.537710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:58576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.925 [2024-11-19 00:10:06.537746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:58584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.925 [2024-11-19 00:10:06.537777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.925 [2024-11-19 00:10:06.537809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.925 [2024-11-19 00:10:06.537840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.925 [2024-11-19 00:10:06.537874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:58616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.925 [2024-11-19 00:10:06.537906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.925 [2024-11-19 00:10:06.537938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:58632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.925 [2024-11-19 00:10:06.537971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.537989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:58640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.925 [2024-11-19 00:10:06.538003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.538036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:58648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.925 [2024-11-19 00:10:06.538049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.538067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.925 [2024-11-19 00:10:06.538080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.538097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:25:59.925 [2024-11-19 00:10:06.538115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.925 [2024-11-19 00:10:06.538130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.925 [2024-11-19 00:10:06.538143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59544 len:8 PRP1 0x0 PRP2 0x0 00:25:59.925 [2024-11-19 00:10:06.538158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.538489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.925 [2024-11-19 00:10:06.538521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.538559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.925 [2024-11-19 00:10:06.538574] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.538589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.925 [2024-11-19 00:10:06.538602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.538617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.925 [2024-11-19 00:10:06.538646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.925 [2024-11-19 00:10:06.538690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:25:59.925 [2024-11-19 00:10:06.538959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:59.926 [2024-11-19 00:10:06.539013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:25:59.926 [2024-11-19 00:10:06.539143] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.926 [2024-11-19 00:10:06.539175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:25:59.926 [2024-11-19 00:10:06.539194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:25:59.926 [2024-11-19 00:10:06.539220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:25:59.926 [2024-11-19 00:10:06.539246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:25:59.926 [2024-11-19 00:10:06.539260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:25:59.926 [2024-11-19 00:10:06.539279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:59.926 [2024-11-19 00:10:06.539295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
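Every reconnect attempt above dies in uring_sock_create with "connect() failed, errno = 111". On Linux, errno 111 is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.3:4420 while the timeout test has the target's listener torn down, so each retry is refused before the NVMe/TCP layer ever gets a qpair. A quick, illustrative way to confirm the mapping on the build host (this one-liner is not part of the test scripts):

  # Resolve errno 111 to its symbolic name and message.
  # On Linux this prints: ECONNREFUSED Connection refused
  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'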
00:25:59.926 [2024-11-19 00:10:06.539312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:10:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:26:01.799 3658.00 IOPS, 14.29 MiB/s [2024-11-19T00:10:08.750Z]
2438.67 IOPS, 9.53 MiB/s [2024-11-19T00:10:08.750Z]
[2024-11-19 00:10:08.539449] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:26:02.058 [2024-11-19 00:10:08.539519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420
00:26:02.058 [2024-11-19 00:10:08.539543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:26:02.058 [2024-11-19 00:10:08.539574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:26:02.058 [2024-11-19 00:10:08.539601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:26:02.058 [2024-11-19 00:10:08.539666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:26:02.058 [2024-11-19 00:10:08.539685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:02.058 [2024-11-19 00:10:08.539702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:26:02.059 [2024-11-19 00:10:08.539719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:10:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:10:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:10:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:26:02.318 00:10:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:10:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:10:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:10:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:26:02.577 00:10:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:10:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:26:03.772 1829.00 IOPS, 7.14 MiB/s [2024-11-19T00:10:10.757Z]
1463.20 IOPS, 5.72 MiB/s [2024-11-19T00:10:10.757Z]
[2024-11-19 00:10:10.539889] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:26:04.065 [2024-11-19 00:10:10.539945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420
00:26:04.065 [2024-11-19 00:10:10.539971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set
00:26:04.065 [2024-11-19 00:10:10.540003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:26:04.065 [2024-11-19 00:10:10.540032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:26:04.065 [2024-11-19 00:10:10.540044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:26:04.065 [2024-11-19 00:10:10.540060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:04.065 [2024-11-19 00:10:10.540075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:26:04.065 [2024-11-19 00:10:10.540091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:05.938 1219.33 IOPS, 4.76 MiB/s [2024-11-19T00:10:12.630Z]
1045.14 IOPS, 4.08 MiB/s [2024-11-19T00:10:12.630Z]
[2024-11-19 00:10:12.540149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:05.938 [2024-11-19 00:10:12.540402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:26:05.938 [2024-11-19 00:10:12.540430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:26:05.938 [2024-11-19 00:10:12.540449] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:26:05.938 [2024-11-19 00:10:12.540467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:26:06.874 914.50 IOPS, 3.57 MiB/s
00:26:06.874 Latency(us)
00:26:06.874 [2024-11-19T00:10:13.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:06.874 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:06.874 Verification LBA range: start 0x0 length 0x4000
00:26:06.874 NVMe0n1 : 8.14 898.91 3.51 15.73 0.00 139733.96 4110.89 7015926.69
00:26:06.874 [2024-11-19T00:10:13.566Z] ===================================================================================================================
00:26:06.874 [2024-11-19T00:10:13.566Z] Total : 898.91 3.51 15.73 0.00 139733.96 4110.89 7015926.69
00:26:06.874 {
00:26:06.874   "results": [
00:26:06.874     {
00:26:06.874       "job": "NVMe0n1",
00:26:06.874       "core_mask": "0x4",
00:26:06.874       "workload": "verify",
00:26:06.874       "status": "finished",
00:26:06.874       "verify_range": {
00:26:06.874         "start": 0,
00:26:06.874         "length": 16384
00:26:06.874       },
00:26:06.874       "queue_depth": 128,
00:26:06.874       "io_size": 4096,
00:26:06.874       "runtime": 8.138715,
00:26:06.874       "iops": 898.9134034058202,
00:26:06.874       "mibps": 3.511380482053985,
00:26:06.874       "io_failed": 128,
00:26:06.874       "io_timeout": 0,
00:26:06.874       "avg_latency_us": 139733.96252650092,
00:26:06.874       "min_latency_us": 4110.894545454546,
00:26:06.874       "max_latency_us": 7015926.69090909
00:26:06.874     }
00:26:06.874   ],
00:26:06.874   "core_count": 1
00:26:06.874 }
00:26:07.440 00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:26:07.698 00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:26:07.956 00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 88208
00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 88190
00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 88190 ']'
00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 88190
00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88190
killing process with pid 88190
Received shutdown signal, test time was about 9.193031 seconds
00:26:07.956 Latency(us)
[2024-11-19T00:10:14.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-19T00:10:14.648Z] ===================================================================================================================
[2024-11-19T00:10:14.648Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88190'
00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 88190
00:10:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 88190
00:10:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
[2024-11-19 00:10:15.673122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
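The get_controller/get_bdev traces above are the test's liveness probe: while the controller is merely resetting, bdev_nvme_get_controllers still reports NVMe0 and bdev_get_bdevs still reports NVMe0n1; once bdev_nvme gives up reconnecting and deletes the controller, both RPCs return empty lists, which is why the later comparisons reduce to [[ '' == '' ]]. A minimal sketch of that probe, reusing the exact RPC calls from the trace (the function name get_names is illustrative, not from the test scripts):

  # Ask the bdevperf RPC socket which controller/bdev it still has attached.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  get_names() {
      "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'
      "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'
  }
  # Prints "NVMe0" and "NVMe0n1" while attached; prints nothing once the
  # controller and its bdev have been deleted.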
00:26:09.150 00:10:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=88338
00:10:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:10:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 88338 /var/tmp/bdevperf.sock
00:10:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 88338 ']'
00:10:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:10:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:10:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
[2024-11-19 00:10:15.786417] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
[2024-11-19 00:10:15.786845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88338 ]
[2024-11-19 00:10:15.952585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-19 00:10:16.041694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-11-19 00:10:16.195449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:26:10.237 00:10:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:10:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:26:10.496 00:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:26:10.754 NVMe0n1
00:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=88360
00:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:10:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:26:10.754 Running I/O for 10 seconds...
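The new bdevperf instance is attached with an explicit reconnect policy: --reconnect-delay-sec 1 waits one second between reconnect attempts, --fast-io-fail-timeout-sec 2 starts failing I/O once the controller has been unreachable for two seconds, and --ctrlr-loss-timeout-sec 5 gives up and deletes the controller after five seconds without a successful reconnect. The same attach command from the trace, spread onto one option per line for readability:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 \
      --fast-io-fail-timeout-sec 2 \
      --reconnect-delay-sec 1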
00:26:11.691 00:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:26:11.953 6549.00 IOPS, 25.58 MiB/s [2024-11-19T00:10:18.645Z]
[2024-11-19 00:10:18.558388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set
00:26:11.953 [... the same tcp.c:1773 recv-state message for tqpair=0x618000003880 repeats dozens of times with advancing timestamps; only the last occurrence is kept below ...]
00:26:11.955 [2024-11-19 00:10:18.559659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set
00:26:11.955 [2024-11-19 00:10:18.559757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.955 [2024-11-19 00:10:18.559815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.955 [... identical READ command/completion pairs for lba:58816 through lba:59088 (cid varies) elided; every queued READ on sqid:1 completes as ABORTED - SQ DELETION (00/08) ...]
00:26:11.956 [2024-11-19 00:10:18.561125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.956 [2024-11-19 00:10:18.561141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.561975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.561992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.956 [2024-11-19 00:10:18.562023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.956 [2024-11-19 00:10:18.562039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:11.957 [2024-11-19 00:10:18.562232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562560] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.562968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.562984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.563000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.563016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.563033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.957 [2024-11-19 00:10:18.563050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.563067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.957 [2024-11-19 00:10:18.563083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.563100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.957 [2024-11-19 00:10:18.563116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.563132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.957 [2024-11-19 00:10:18.563148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.563164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.957 [2024-11-19 00:10:18.563180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.563197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.957 [2024-11-19 00:10:18.563215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.957 [2024-11-19 00:10:18.563231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:66 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59544 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.958 [2024-11-19 00:10:18.563574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.958 [2024-11-19 00:10:18.563619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 
[2024-11-19 00:10:18.563929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.563977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.563993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.564009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.564025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.564041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.564057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.564073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.564089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.564104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.564124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.564140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.564156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.564172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.564188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.564204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.958 [2024-11-19 00:10:18.564220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.564235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:26:11.958 [2024-11-19 00:10:18.564256] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:11.958 [2024-11-19 00:10:18.564269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:11.958 [2024-11-19 00:10:18.564287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59824 len:8 PRP1 0x0 PRP2 0x0 00:26:11.958 [2024-11-19 00:10:18.564301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.958 [2024-11-19 00:10:18.564720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.959 [2024-11-19 00:10:18.564757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.959 [2024-11-19 00:10:18.564776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.959 [2024-11-19 00:10:18.564795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.959 [2024-11-19 00:10:18.564810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.959 [2024-11-19 00:10:18.564826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.959 [2024-11-19 00:10:18.564841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.959 [2024-11-19 00:10:18.564857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.959 [2024-11-19 00:10:18.564871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:11.959 [2024-11-19 00:10:18.565136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:11.959 [2024-11-19 00:10:18.565187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:11.959 [2024-11-19 00:10:18.565315] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:11.959 [2024-11-19 00:10:18.565349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:11.959 [2024-11-19 00:10:18.565366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:11.959 [2024-11-19 00:10:18.565397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:11.959 [2024-11-19 00:10:18.565422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:11.959 [2024-11-19 00:10:18.565440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:11.959 [2024-11-19 00:10:18.565457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
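Two details worth decoding in the records above: the "(00/08)" in each completion print is the NVMe status pair (Status Code Type 0x0, Generic Command Status; Status Code 0x08, Command Aborted due to SQ Deletion), and errno = 111 is ECONNREFUSED on Linux, expected while the target's listener is down. A minimal sketch of that decoding; the lookup table and helper names are illustrative, not SPDK APIs:

    import errno

    # SPDK prints completions as "ABORTED - SQ DELETION (00/08)"; the pair is
    # (Status Code Type / Status Code). SCT 0x0 is Generic Command Status.
    GENERIC_STATUS = {
        0x00: "SUCCESSFUL COMPLETION",
        0x08: "ABORTED - SQ DELETION",  # Command Aborted due to SQ Deletion
    }

    def decode_status(sct: int, sc: int) -> str:
        # Only the SCT 0x0 entries seen in this log are tabulated here.
        if sct == 0x0:
            return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
        return f"sct 0x{sct:x}, sc 0x{sc:02x}"

    print(decode_status(0x0, 0x08))  # -> ABORTED - SQ DELETION
    print(errno.ECONNREFUSED)        # -> 111 on Linux, the connect() failure above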
00:26:11.959 [2024-11-19 00:10:18.565477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:11.959 [2024-11-19 00:10:18.565493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:11.959 00:10:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:26:12.894 3675.50 IOPS, 14.36 MiB/s [2024-11-19T00:10:19.586Z]
[2024-11-19 00:10:19.565632] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.894 [2024-11-19 00:10:19.565694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420
00:26:12.894 (the reconnect attempt fails with the same recv-state, flush, and controller-reinitialization error sequence as above; the controller stays in the failed state and another reset is scheduled)
00:26:13.153 00:10:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:26:13.412 [2024-11-19 00:10:19.850185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:26:13.412 00:10:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 88360
00:26:13.978 2450.33 IOPS, 9.57 MiB/s [2024-11-19T00:10:20.670Z]
[2024-11-19 00:10:20.578204] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
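What this stretch of the log shows: the nvmf_timeout test (host/timeout.sh) has pulled the target's TCP listener, so in-flight I/O is aborted via submission-queue deletion and the host's reconnect attempts are refused until the listener is re-added at 00:10:19.850, after which the controller reset succeeds. A rough sketch of driving the same window with SPDK's rpc.py; the path, NQN, and listener arguments are copied from the log, while the wrapper itself is illustrative:

    import subprocess
    import time

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # path as invoked in this log
    NQN = "nqn.2016-06.io.spdk:cnode1"
    LISTENER = ["-t", "tcp", "-a", "10.0.0.3", "-s", "4420"]

    def rpc(*args: str) -> None:
        # Shell out to SPDK's JSON-RPC client, as the test script does.
        subprocess.run([RPC, *args], check=True)

    # While the listener is gone, outstanding I/O completes ABORTED - SQ DELETION
    # and reconnects fail with errno 111 (ECONNREFUSED).
    rpc("nvmf_subsystem_remove_listener", NQN, *LISTENER)
    time.sleep(1)
    rpc("nvmf_subsystem_add_listener", NQN, *LISTENER)  # reset then succeeds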
00:26:15.850 1837.75 IOPS, 7.18 MiB/s [2024-11-19T00:10:23.479Z]
2938.00 IOPS, 11.48 MiB/s [2024-11-19T00:10:24.855Z]
3911.67 IOPS, 15.28 MiB/s [2024-11-19T00:10:25.791Z]
4588.00 IOPS, 17.92 MiB/s [2024-11-19T00:10:26.727Z]
5097.75 IOPS, 19.91 MiB/s [2024-11-19T00:10:27.664Z]
5497.56 IOPS, 21.47 MiB/s [2024-11-19T00:10:27.664Z]
5815.00 IOPS, 22.71 MiB/s
00:26:20.972 Latency(us)
00:26:20.972 [2024-11-19T00:10:27.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:20.972 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:20.972 Verification LBA range: start 0x0 length 0x4000
00:26:20.972 NVMe0n1 : 10.01 5821.33 22.74 0.00 0.00 21951.35 1727.77 3035150.89
00:26:20.972 [2024-11-19T00:10:27.664Z] ===================================================================================================================
00:26:20.972 [2024-11-19T00:10:27.664Z] Total : 5821.33 22.74 0.00 0.00 21951.35 1727.77 3035150.89
00:26:20.972 {
00:26:20.972   "results": [
00:26:20.972     {
00:26:20.972       "job": "NVMe0n1",
00:26:20.972       "core_mask": "0x4",
00:26:20.972       "workload": "verify",
00:26:20.972       "status": "finished",
00:26:20.972       "verify_range": {
00:26:20.972         "start": 0,
00:26:20.972         "length": 16384
00:26:20.972       },
00:26:20.972       "queue_depth": 128,
00:26:20.972       "io_size": 4096,
00:26:20.972       "runtime": 10.012491,
00:26:20.972       "iops": 5821.3285784726295,
00:26:20.972       "mibps": 22.73956475965871,
00:26:20.972       "io_failed": 0,
00:26:20.972       "io_timeout": 0,
00:26:20.972       "avg_latency_us": 21951.34553190693,
00:26:20.972       "min_latency_us": 1727.7672727272727,
00:26:20.972       "max_latency_us": 3035150.8945454545
00:26:20.972     }
00:26:20.972   ],
00:26:20.972   "core_count": 1
00:26:20.972 }
00:26:20.972 00:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=88465
00:26:20.972 00:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:20.972 00:10:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:26:20.972 Running I/O for 10 seconds...
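As a quick consistency check on the per-job JSON above, the reported "mibps" follows directly from "iops" and "io_size" (4096-byte I/Os, MiB = 2^20 bytes):

    iops = 5821.3285784726295  # "iops" from the results JSON above
    io_size = 4096             # bytes per I/O ("io_size")
    mibps = iops * io_size / 2**20
    print(round(mibps, 2))     # -> 22.74, matching the reported "mibps"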
00:26:21.908 00:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:26:22.170 6566.00 IOPS, 25.65 MiB/s [2024-11-19T00:10:28.862Z]
[2024-11-19 00:10:28.689834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set
00:26:22.170 (identical recv-state message repeated for tqpair=0x618000004480 from 00:10:28.690068 through 00:10:28.691438)
00:26:22.171 [2024-11-19 00:10:28.691451]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:22.171 [2024-11-19 00:10:28.691462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:22.171 [2024-11-19 00:10:28.691474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:22.171 [2024-11-19 00:10:28.691485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:22.171 [2024-11-19 00:10:28.691500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:22.171 [2024-11-19 00:10:28.691511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:22.171 [2024-11-19 00:10:28.691585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.171 [2024-11-19 00:10:28.691623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.171 [2024-11-19 00:10:28.691668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:57776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.171 [2024-11-19 00:10:28.691687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.171 [2024-11-19 00:10:28.691704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.171 [2024-11-19 00:10:28.691718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.171 [2024-11-19 00:10:28.691735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.171 [2024-11-19 00:10:28.691749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.691764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:57800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.691777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.691793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.691806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.691821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:57816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.691834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.691850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:57824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.691863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.691878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:57832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.691891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.691907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:57840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.691920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.691935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.691948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.691963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:57856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.691976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.691991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57904 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:57912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:57936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:57944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:57952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:57968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:57976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 
00:10:28.692496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:58008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:58016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:58024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:58040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:58064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:58080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:58088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:58096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.172 [2024-11-19 00:10:28.692969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.172 [2024-11-19 00:10:28.692985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:58120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:58136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:58152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:58168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:58176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:58208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:58248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:58256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:58272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:58280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:58288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:58296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:58304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 
[2024-11-19 00:10:28.693813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:58312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:58336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.693984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:58360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.693997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.694012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:58368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.694025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.694041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:58376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.694057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.694073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:58384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.694087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.694102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:58392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.694115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.694131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:58400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.694144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.173 [2024-11-19 00:10:28.694159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:58408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.173 [2024-11-19 00:10:28.694176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:58424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:58440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:58448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:58464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:58472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:58480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:58504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:58520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:58528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58552 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:58568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:58592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:58600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.694971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.694986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.695000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.695016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:58632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:22.174 [2024-11-19 00:10:28.695032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.695048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:58640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.695061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.695078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:58648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.174 [2024-11-19 00:10:28.695091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.695108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.174 [2024-11-19 00:10:28.695136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.695166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.174 [2024-11-19 00:10:28.695182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.695197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.174 [2024-11-19 00:10:28.695210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.695225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.174 [2024-11-19 00:10:28.695237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.695252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.174 [2024-11-19 00:10:28.695265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.695295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.174 [2024-11-19 00:10:28.695308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.695323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.174 [2024-11-19 00:10:28.695336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.695368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.174 [2024-11-19 00:10:28.695397] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.695412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.174 [2024-11-19 00:10:28.695425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.174 [2024-11-19 00:10:28.695440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.175 [2024-11-19 00:10:28.695454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.175 [2024-11-19 00:10:28.695469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.175 [2024-11-19 00:10:28.695483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.175 [2024-11-19 00:10:28.695498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.175 [2024-11-19 00:10:28.695511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.175 [2024-11-19 00:10:28.695526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.175 [2024-11-19 00:10:28.695539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.175 [2024-11-19 00:10:28.695572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.175 [2024-11-19 00:10:28.695588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.175 [2024-11-19 00:10:28.695604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.175 [2024-11-19 00:10:28.695618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.175 [2024-11-19 00:10:28.695634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:58656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.175 [2024-11-19 00:10:28.695648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.175 [2024-11-19 00:10:28.695663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002bc80 is same with the state(6) to be set 00:26:22.175 [2024-11-19 00:10:28.695681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:22.175 [2024-11-19 00:10:28.695693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:22.175 [2024-11-19 00:10:28.695719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58664 len:8 PRP1 0x0 PRP2 0x0 00:26:22.175 [2024-11-19 
00:10:28.695734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.175 [2024-11-19 00:10:28.696079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.175 [2024-11-19 00:10:28.696124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.175 [2024-11-19 00:10:28.696141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.175 [2024-11-19 00:10:28.696154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.175 [2024-11-19 00:10:28.696168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.175 [2024-11-19 00:10:28.696180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.175 [2024-11-19 00:10:28.696194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.175 [2024-11-19 00:10:28.696223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.175 [2024-11-19 00:10:28.696236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:22.175 [2024-11-19 00:10:28.696518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:22.175 [2024-11-19 00:10:28.696552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:22.175 [2024-11-19 00:10:28.696691] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.175 [2024-11-19 00:10:28.696730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:22.175 [2024-11-19 00:10:28.696747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:22.175 [2024-11-19 00:10:28.696775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:22.175 [2024-11-19 00:10:28.696800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:22.175 [2024-11-19 00:10:28.696815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:22.175 [2024-11-19 00:10:28.696830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:22.175 [2024-11-19 00:10:28.696845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:26:22.175 [2024-11-19 00:10:28.696860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:22.175 00:10:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:26:23.140 3610.50 IOPS, 14.10 MiB/s [2024-11-19T00:10:29.832Z] [2024-11-19 00:10:29.710485] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.140 [2024-11-19 00:10:29.710561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:23.140 [2024-11-19 00:10:29.710582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:23.140 [2024-11-19 00:10:29.710630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:23.140 [2024-11-19 00:10:29.710675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:23.140 [2024-11-19 00:10:29.710689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:23.140 [2024-11-19 00:10:29.710704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:23.140 [2024-11-19 00:10:29.710719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:26:23.140 [2024-11-19 00:10:29.710749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:24.091 2407.00 IOPS, 9.40 MiB/s [2024-11-19T00:10:30.783Z] [2024-11-19 00:10:30.710890] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.091 [2024-11-19 00:10:30.710967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:24.091 [2024-11-19 00:10:30.710988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:24.091 [2024-11-19 00:10:30.711019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:24.091 [2024-11-19 00:10:30.711045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:24.091 [2024-11-19 00:10:30.711059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:24.091 [2024-11-19 00:10:30.711072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:24.091 [2024-11-19 00:10:30.711087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:26:24.091 [2024-11-19 00:10:30.711101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:25.029 1805.25 IOPS, 7.05 MiB/s [2024-11-19T00:10:31.721Z] [2024-11-19 00:10:31.711552] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.029 [2024-11-19 00:10:31.711784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:25.029 [2024-11-19 00:10:31.711817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:25.029 [2024-11-19 00:10:31.712073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:25.029 [2024-11-19 00:10:31.712370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:25.029 [2024-11-19 00:10:31.712391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:25.029 [2024-11-19 00:10:31.712407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:25.029 [2024-11-19 00:10:31.712423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:26:25.029 [2024-11-19 00:10:31.712438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:25.288 00:10:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:25.547 [2024-11-19 00:10:31.982687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:25.547 00:10:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 88465 00:26:26.115 1444.20 IOPS, 5.64 MiB/s [2024-11-19T00:10:32.807Z] [2024-11-19 00:10:32.737710] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
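The recovery above is the test working as intended: host/timeout.sh had removed the subsystem's TCP listener, so each reconnect attempt died with connect() errno 111, and the @101 "sleep 3" simply waits out a few of those failures before @102 re-adds the listener and the controller reset completes. A minimal sketch of that listener toggle, assuming a running SPDK nvmf target with nqn.2016-06.io.spdk:cnode1 already configured (commands, paths, and addresses taken verbatim from this log):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
ADDR='-t tcp -a 10.0.0.3 -s 4420'

# Drop the listener: initiator connects now fail (uring_sock_create errno = 111),
# pushing the controller through the disconnect/reconnect-poll cycle seen above.
$RPC nvmf_subsystem_remove_listener $NQN $ADDR

sleep 3   # let a few reconnect attempts fail, as timeout.sh@101 does

# Restore the listener; the next reconnect attempt succeeds and the log
# reports "Resetting controller successful".
$RPC nvmf_subsystem_add_listener $NQN $ADDR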
00:26:27.992 2453.00 IOPS, 9.58 MiB/s [2024-11-19T00:10:35.620Z] 3369.71 IOPS, 13.16 MiB/s [2024-11-19T00:10:36.998Z] 4052.50 IOPS, 15.83 MiB/s [2024-11-19T00:10:37.937Z] 4581.56 IOPS, 17.90 MiB/s [2024-11-19T00:10:37.937Z] 5009.40 IOPS, 19.57 MiB/s 00:26:31.245 Latency(us) 00:26:31.245 [2024-11-19T00:10:37.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.245 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:31.245 Verification LBA range: start 0x0 length 0x4000 00:26:31.245 NVMe0n1 : 10.01 5016.42 19.60 4153.44 0.00 13925.42 610.68 3035150.89 00:26:31.245 [2024-11-19T00:10:37.937Z] =================================================================================================================== 00:26:31.245 [2024-11-19T00:10:37.937Z] Total : 5016.42 19.60 4153.44 0.00 13925.42 0.00 3035150.89 00:26:31.245 { 00:26:31.245 "results": [ 00:26:31.245 { 00:26:31.245 "job": "NVMe0n1", 00:26:31.245 "core_mask": "0x4", 00:26:31.245 "workload": "verify", 00:26:31.245 "status": "finished", 00:26:31.245 "verify_range": { 00:26:31.245 "start": 0, 00:26:31.245 "length": 16384 00:26:31.245 }, 00:26:31.245 "queue_depth": 128, 00:26:31.245 "io_size": 4096, 00:26:31.245 "runtime": 10.009531, 00:26:31.245 "iops": 5016.418851192928, 00:26:31.245 "mibps": 19.595386137472374, 00:26:31.245 "io_failed": 41574, 00:26:31.245 "io_timeout": 0, 00:26:31.245 "avg_latency_us": 13925.416270494807, 00:26:31.245 "min_latency_us": 610.6763636363636, 00:26:31.245 "max_latency_us": 3035150.8945454545 00:26:31.245 } 00:26:31.245 ], 00:26:31.245 "core_count": 1 00:26:31.245 } 00:26:31.245 00:10:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 88338 00:26:31.245 00:10:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 88338 ']' 00:26:31.245 00:10:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 88338 00:26:31.245 00:10:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:26:31.245 00:10:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:31.245 00:10:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88338 00:26:31.245 killing process with pid 88338 00:26:31.245 Received shutdown signal, test time was about 10.000000 seconds 00:26:31.245 00:26:31.245 Latency(us) 00:26:31.245 [2024-11-19T00:10:37.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.245 [2024-11-19T00:10:37.937Z] =================================================================================================================== 00:26:31.245 [2024-11-19T00:10:37.937Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:31.245 00:10:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:31.245 00:10:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:31.245 00:10:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88338' 00:26:31.245 00:10:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 88338 00:26:31.245 00:10:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 88338 00:26:31.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
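bdevperf emits the run summary twice, once as the human-readable Latency table and once as the JSON object reproduced above; the JSON is the form worth post-processing. A hypothetical extraction sketch, assuming the JSON block has been captured to a file bdevperf.json with the log's timestamp prefixes stripped, and that jq is available (neither the file nor the tool is part of the test itself):

# Headline numbers from the bdevperf JSON summary above.
jq -r '.results[0] | "iops=\(.iops) io_failed=\(.io_failed) avg_latency_us=\(.avg_latency_us)"' bdevperf.json
# iops=5016.418851192928 io_failed=41574 avg_latency_us=13925.416270494807

The large io_failed count is consistent with the outage the test injects: those are reads completed as ABORTED - SQ DELETION while the listener was down, not media or data errors.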
00:26:31.814 00:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=88583 00:26:31.814 00:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:26:31.814 00:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 88583 /var/tmp/bdevperf.sock 00:26:31.814 00:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 88583 ']' 00:26:31.814 00:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:31.814 00:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:31.814 00:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:31.814 00:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:31.814 00:10:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.073 [2024-11-19 00:10:38.502795] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:26:32.073 [2024-11-19 00:10:38.504051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88583 ] 00:26:32.073 [2024-11-19 00:10:38.683392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.332 [2024-11-19 00:10:38.775577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:32.332 [2024-11-19 00:10:38.929022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:32.903 00:10:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.903 00:10:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:32.903 00:10:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=88600 00:26:32.903 00:10:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88583 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:26:32.903 00:10:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:26:33.163 00:10:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:26:33.422 NVMe0n1 00:26:33.422 00:10:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=88640 00:26:33.422 00:10:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:33.422 00:10:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:26:33.681 Running I/O for 10 seconds... 
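The @109-@125 trace lines above assemble the next phase: bdevperf is launched paused (-z) with its own RPC socket, a bpftrace probe is attached to its pid, the NVMe controller is attached with an explicit reconnect policy, and only then is the job kicked off. Condensed into a sketch, with every flag taken verbatim from the trace (the waitforlisten loop and the bpftrace attach are elided):

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock

# Start bdevperf idle (-z): core mask 0x4, queue depth 128, 4 KiB random reads, 10 s.
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w randread -t 10 -f &

# Driver options exactly as traced; see `rpc.py bdev_nvme_set_options -h` for -r/-e.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options -r -1 -e 9

# Attach NVMe0 with the reconnect policy under test: retry every 2 s,
# declare the controller lost after 5 s without a connection.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Run the configured workload against the attached bdev ("Running I/O for 10 seconds...").
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests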
00:26:34.623 00:10:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:34.623 13970.00 IOPS, 54.57 MiB/s [2024-11-19T00:10:41.315Z] [2024-11-19 00:10:41.268600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.623 [... the tcp.c:1773 recv-state message above repeats verbatim ~120 more times, 00:10:41.268691 through 00:10:41.270328, all for tqpair=0x618000005880 ...] 00:26:34.625 [2024-11-19 00:10:41.270321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.625 [2024-11-19
00:10:41.270341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:26:34.625 [2024-11-19 00:10:41.270362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.625 [2024-11-19 00:10:41.270380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.625 [2024-11-19 00:10:41.270395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.625 [2024-11-19 00:10:41.270409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.625 [2024-11-19 00:10:41.270425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.625 [2024-11-19 00:10:41.270439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.625 [2024-11-19 00:10:41.270457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.625 [2024-11-19 00:10:41.270470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:34.625 [2024-11-19 00:10:41.270553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.625 [2024-11-19 00:10:41.270575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.625 [2024-11-19 00:10:41.270620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.625 [2024-11-19 00:10:41.270652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.625 [2024-11-19 00:10:41.270671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.625 [2024-11-19 00:10:41.270685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.625 [2024-11-19 00:10:41.270719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.625 [2024-11-19 00:10:41.270737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.625 [2024-11-19 00:10:41.270756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:57192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.625 [2024-11-19 00:10:41.270770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.625 [2024-11-19 00:10:41.270788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.625 [2024-11-19 00:10:41.270802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.625 [2024-11-19 00:10:41.270820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.625 [2024-11-19 00:10:41.270834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.625 [2024-11-19 00:10:41.270854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.625 [2024-11-19 00:10:41.270868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.625 [2024-11-19 00:10:41.270886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.625 [2024-11-19 00:10:41.270899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.625 [2024-11-19 00:10:41.270917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.625 [2024-11-19 00:10:41.270931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.270950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.270978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:103096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271152] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:121784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:35048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:108080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:101000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:34.626 [2024-11-19 00:10:41.271859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:29744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.271962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.271991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.272024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.272052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.272069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.272101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.272121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.272135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.272152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.272165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.272182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.272195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.272213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.272227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 
00:10:41.272244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.272257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.272276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.272289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.626 [2024-11-19 00:10:41.272306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.626 [2024-11-19 00:10:41.272319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.627 [2024-11-19 00:10:41.272366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.627 [2024-11-19 00:10:41.272382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.627 [2024-11-19 00:10:41.272403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.627 [2024-11-19 00:10:41.272418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.627 [2024-11-19 00:10:41.272436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.627 [2024-11-19 00:10:41.272451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.627 [2024-11-19 00:10:41.272469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.627 [2024-11-19 00:10:41.272483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.627 [2024-11-19 00:10:41.272501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.627 [2024-11-19 00:10:41.272516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.627 [2024-11-19 00:10:41.272536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:50544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.627 [2024-11-19 00:10:41.272550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.627 [2024-11-19 00:10:41.272571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.627 [2024-11-19 00:10:41.272586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.627 [2024-11-19 00:10:41.272604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.627 [2024-11-19 00:10:41.272632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 00:10:41.272653 through 00:10:41.275023: the identical command/completion pair repeats for every remaining outstanding READ on qid:1 (cid 67 down through cid 0, then cid 125), each at a different lba and each completed ABORTED - SQ DELETION (00/08); roughly 69 near-identical record pairs condensed here ...]
00:26:34.629 [2024-11-19 00:10:41.275039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:119256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.629 [2024-11-19 00:10:41.275052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:26:34.629 [2024-11-19 00:10:41.275070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:26:34.629 [2024-11-19 00:10:41.275090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.629 [2024-11-19 00:10:41.275105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.629 [2024-11-19 00:10:41.275118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96112 len:8 PRP1 0x0 PRP2 0x0 00:26:34.629 [2024-11-19 00:10:41.275133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.629 [2024-11-19 00:10:41.275711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:34.629 [2024-11-19 00:10:41.275760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:34.629 [2024-11-19 00:10:41.275910] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.629 [2024-11-19 00:10:41.275944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:34.629 [2024-11-19 00:10:41.275979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:34.629 [2024-11-19 00:10:41.276006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:34.629 [2024-11-19 00:10:41.276032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:26:34.629 [2024-11-19 00:10:41.276047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:26:34.629 [2024-11-19 00:10:41.276063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:34.629 [2024-11-19 00:10:41.276079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
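What the burst above shows: once the target's listener goes away, every queued READ on qid:1 is completed with ABORTED - SQ DELETION, and the host drops into a reset/reconnect loop in which each connect() is refused (errno 111 is ECONNREFUSED) and the reset is declared failed. The retry pacing comes from the reconnect options the controller was attached with. A minimal sketch of such an attach, assuming the rpc.py flags available in this SPDK tree (the names and values here are illustrative, not copied from the test script):

# attach a TCP controller with a 2 s reconnect delay and a bounded loss window (sketch)
scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 8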
00:26:34.629 [2024-11-19 00:10:41.276098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:34.629 00:10:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 88640 00:26:36.503 7557.50 IOPS, 29.52 MiB/s [2024-11-19T00:10:43.455Z] 5038.33 IOPS, 19.68 MiB/s [2024-11-19T00:10:43.455Z] [2024-11-19 00:10:43.291364] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.763 [2024-11-19 00:10:43.291436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:36.763 [2024-11-19 00:10:43.291461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:36.763 [2024-11-19 00:10:43.291494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:36.763 [2024-11-19 00:10:43.291524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:26:36.763 [2024-11-19 00:10:43.291537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:26:36.763 [2024-11-19 00:10:43.291555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:36.763 [2024-11-19 00:10:43.291570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:26:36.763 [2024-11-19 00:10:43.291587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:38.639 3778.75 IOPS, 14.76 MiB/s [2024-11-19T00:10:45.331Z] 3023.00 IOPS, 11.81 MiB/s [2024-11-19T00:10:45.331Z] [2024-11-19 00:10:45.291816] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.639 [2024-11-19 00:10:45.291906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:38.639 [2024-11-19 00:10:45.291933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:38.639 [2024-11-19 00:10:45.291967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:38.639 [2024-11-19 00:10:45.291998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:26:38.639 [2024-11-19 00:10:45.292013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:26:38.639 [2024-11-19 00:10:45.292030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:38.639 [2024-11-19 00:10:45.292061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:26:38.639 [2024-11-19 00:10:45.292079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:40.513 2519.17 IOPS, 9.84 MiB/s [2024-11-19T00:10:47.464Z] 2159.29 IOPS, 8.43 MiB/s [2024-11-19T00:10:47.464Z] [2024-11-19 00:10:47.292184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
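Reading the timestamps, each retry lands almost exactly 2 s after the previous one (00:10:41.275, 00:10:43.291, 00:10:45.291, 00:10:47.292). One way to confirm the cadence from a saved copy of this console output (file name hypothetical; one log record per line assumed):

# print the gap in seconds between successive disconnect/reset notices
sed -n 's/.*\[2024-11-19 \([0-9:.]*\)\] nvme_ctrlr\.c:1728:nvme_ctrlr_disconnect.*/\1/p' timeout.log \
    | awk -F: '{t = $1*3600 + $2*60 + $3; if (p) printf "%.3f s\n", t - p; p = t}'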
00:26:40.772 [2024-11-19 00:10:47.292268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:26:40.772 [2024-11-19 00:10:47.292300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:26:40.772 [2024-11-19 00:10:47.292317] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:26:40.772 [2024-11-19 00:10:47.292358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:26:41.710 1889.38 IOPS, 7.38 MiB/s 00:26:41.710 Latency(us) 00:26:41.710 [2024-11-19T00:10:48.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.710 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:26:41.710 NVMe0n1 : 8.10 1865.71 7.29 15.80 0.00 68126.46 8638.84 7046430.72 00:26:41.710 [2024-11-19T00:10:48.402Z] =================================================================================================================== 00:26:41.710 [2024-11-19T00:10:48.402Z] Total : 1865.71 7.29 15.80 0.00 68126.46 8638.84 7046430.72 00:26:41.710 { 00:26:41.710 "results": [ 00:26:41.710 { 00:26:41.710 "job": "NVMe0n1", 00:26:41.710 "core_mask": "0x4", 00:26:41.710 "workload": "randread", 00:26:41.710 "status": "finished", 00:26:41.710 "queue_depth": 128, 00:26:41.710 "io_size": 4096, 00:26:41.710 "runtime": 8.101476, 00:26:41.710 "iops": 1865.7094090015203, 00:26:41.710 "mibps": 7.287927378912189, 00:26:41.710 "io_failed": 128, 00:26:41.710 "io_timeout": 0, 00:26:41.710 "avg_latency_us": 68126.46075253618, 00:26:41.710 "min_latency_us": 8638.836363636363, 00:26:41.710 "max_latency_us": 7046430.72 00:26:41.710 } 00:26:41.710 ], 00:26:41.710 "core_count": 1 00:26:41.710 } 00:26:41.710 00:10:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:41.710 Attaching 5 probes... 
00:26:41.710 1351.015270: reset bdev controller NVMe0 00:26:41.710 1351.132514: reconnect bdev controller NVMe0 00:26:41.710 3366.547882: reconnect delay bdev controller NVMe0 00:26:41.710 3366.583947: reconnect bdev controller NVMe0 00:26:41.710 5367.003265: reconnect delay bdev controller NVMe0 00:26:41.710 5367.038878: reconnect bdev controller NVMe0 00:26:41.710 7367.444599: reconnect delay bdev controller NVMe0 00:26:41.710 7367.495644: reconnect bdev controller NVMe0 00:26:41.710 00:10:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:26:41.710 00:10:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:26:41.710 00:10:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 88600 00:26:41.710 00:10:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:41.710 00:10:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 88583 00:26:41.710 00:10:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 88583 ']' 00:26:41.710 00:10:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 88583 00:26:41.710 00:10:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:26:41.710 00:10:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:41.710 00:10:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88583 00:26:41.710 killing process with pid 88583 00:26:41.710 Received shutdown signal, test time was about 8.174455 seconds 00:26:41.710 00:26:41.710 Latency(us) 00:26:41.710 [2024-11-19T00:10:48.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.710 [2024-11-19T00:10:48.402Z] =================================================================================================================== 00:26:41.710 [2024-11-19T00:10:48.402Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:41.710 00:10:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:41.710 00:10:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:41.710 00:10:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88583' 00:26:41.710 00:10:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 88583 00:26:41.710 00:10:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 88583 00:26:42.676 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:42.936 00:10:49 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:42.936 rmmod nvme_tcp 00:26:42.936 rmmod nvme_fabrics 00:26:42.936 rmmod nvme_keyring 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 88135 ']' 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 88135 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 88135 ']' 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 88135 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88135 00:26:42.936 killing process with pid 88135 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88135' 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 88135 00:26:42.936 00:10:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 88135 00:26:43.872 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:43.872 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:43.872 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:43.872 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:26:43.872 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:43.872 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:26:43.872 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:26:43.872 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:43.872 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:43.872 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:43.872 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:43.872 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:44.131 00:10:50 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:26:44.131 00:26:44.131 real 0m50.146s 00:26:44.131 user 2m26.082s 00:26:44.131 sys 0m5.453s 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.131 ************************************ 00:26:44.131 END TEST nvmf_timeout 00:26:44.131 ************************************ 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:44.131 00:26:44.131 real 6m23.291s 00:26:44.131 user 17m45.207s 00:26:44.131 sys 1m17.088s 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:44.131 ************************************ 00:26:44.131 00:10:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.131 END TEST nvmf_host 00:26:44.131 ************************************ 00:26:44.391 00:10:50 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:26:44.391 00:10:50 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:26:44.391 00:26:44.391 real 17m0.563s 00:26:44.391 user 44m11.507s 00:26:44.391 sys 4m7.127s 00:26:44.391 00:10:50 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:44.391 ************************************ 00:26:44.391 END TEST nvmf_tcp 00:26:44.391 ************************************ 00:26:44.391 00:10:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:44.391 00:10:50 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:26:44.391 00:10:50 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:44.391 00:10:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:44.391 00:10:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:44.391 00:10:50 -- common/autotest_common.sh@10 -- # set +x 00:26:44.391 ************************************ 00:26:44.391 START TEST nvmf_dif 00:26:44.391 ************************************ 00:26:44.391 00:10:50 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:44.391 * Looking for test storage... 
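Two notes before the dif run gets going. First, the START TEST / END TEST banners and the real/user/sys lines above come from the run_test wrapper; a simplified sketch of what it does (the real helper in autotest_common.sh also records per-test timing for the final summary):

# sketch of the run_test harness wrapper
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

Second, the per-job JSON printed in the nvmf_timeout run above is the source of its human-readable summary row: mibps is just iops * io_size / 2^20, i.e. 1865.71 * 4096 / 1048576 ≈ 7.29, matching the logged value. With the JSON saved to a file (name hypothetical, log prefixes stripped), jq pulls the headline numbers back out:

jq -r '.results[0] | "\(.iops) IOPS  \(.mibps) MiB/s  \(.io_failed) failed IOs"' perf.json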
00:26:44.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:44.391 00:10:50 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:44.391 00:10:50 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:26:44.391 00:10:50 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:44.391 00:10:51 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:44.391 00:10:51 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:26:44.391 00:10:51 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:44.391 00:10:51 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:44.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.391 --rc genhtml_branch_coverage=1 00:26:44.391 --rc genhtml_function_coverage=1 00:26:44.391 --rc genhtml_legend=1 00:26:44.391 --rc geninfo_all_blocks=1 00:26:44.391 --rc geninfo_unexecuted_blocks=1 00:26:44.391 00:26:44.391 ' 00:26:44.391 00:10:51 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:44.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.391 --rc genhtml_branch_coverage=1 00:26:44.391 --rc genhtml_function_coverage=1 00:26:44.391 --rc genhtml_legend=1 00:26:44.391 --rc geninfo_all_blocks=1 00:26:44.392 --rc geninfo_unexecuted_blocks=1 00:26:44.392 00:26:44.392 ' 00:26:44.392 00:10:51 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:26:44.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.392 --rc genhtml_branch_coverage=1 00:26:44.392 --rc genhtml_function_coverage=1 00:26:44.392 --rc genhtml_legend=1 00:26:44.392 --rc geninfo_all_blocks=1 00:26:44.392 --rc geninfo_unexecuted_blocks=1 00:26:44.392 00:26:44.392 ' 00:26:44.392 00:10:51 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:44.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.392 --rc genhtml_branch_coverage=1 00:26:44.392 --rc genhtml_function_coverage=1 00:26:44.392 --rc genhtml_legend=1 00:26:44.392 --rc geninfo_all_blocks=1 00:26:44.392 --rc geninfo_unexecuted_blocks=1 00:26:44.392 00:26:44.392 ' 00:26:44.392 00:10:51 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:44.392 00:10:51 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:26:44.392 00:10:51 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.392 00:10:51 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.392 00:10:51 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.392 00:10:51 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.392 00:10:51 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.392 00:10:51 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.392 00:10:51 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:26:44.392 00:10:51 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:44.392 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:44.392 00:10:51 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:26:44.392 00:10:51 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:44.392 00:10:51 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:44.392 00:10:51 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:26:44.392 00:10:51 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.392 00:10:51 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:44.392 00:10:51 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:44.392 00:10:51 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:44.392 00:10:51 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:44.651 Cannot find device "nvmf_init_br" 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@162 -- # true 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:44.651 Cannot find device "nvmf_init_br2" 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@163 -- # true 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:44.651 Cannot find device "nvmf_tgt_br" 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@164 -- # true 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:44.651 Cannot find device "nvmf_tgt_br2" 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@165 -- # true 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:44.651 Cannot find device "nvmf_init_br" 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@166 -- # true 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:44.651 Cannot find device "nvmf_init_br2" 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@167 -- # true 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:44.651 Cannot find device "nvmf_tgt_br" 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@168 -- # true 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:44.651 Cannot find device "nvmf_tgt_br2" 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@169 -- # true 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:44.651 Cannot find device "nvmf_br" 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@170 -- # true 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:26:44.651 Cannot find device "nvmf_init_if" 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@171 -- # true 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:44.651 Cannot find device "nvmf_init_if2" 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@172 -- # true 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:44.651 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@173 -- # true 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:44.651 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@174 -- # true 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:44.651 00:10:51 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:44.911 00:10:51 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:44.911 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:44.911 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:26:44.911 00:26:44.911 --- 10.0.0.3 ping statistics --- 00:26:44.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.911 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:44.911 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:44.911 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:26:44.911 00:26:44.911 --- 10.0.0.4 ping statistics --- 00:26:44.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.911 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:44.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:44.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:26:44.911 00:26:44.911 --- 10.0.0.1 ping statistics --- 00:26:44.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.911 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:44.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:44.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:26:44.911 00:26:44.911 --- 10.0.0.2 ping statistics --- 00:26:44.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.911 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:26:44.911 00:10:51 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:45.170 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:45.170 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:45.170 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:45.170 00:10:51 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:45.170 00:10:51 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:45.170 00:10:51 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:45.170 00:10:51 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:45.170 00:10:51 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:45.170 00:10:51 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:45.429 00:10:51 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:45.429 00:10:51 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:26:45.429 00:10:51 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:45.429 00:10:51 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:45.429 00:10:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:45.429 00:10:51 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=89148 00:26:45.430 00:10:51 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 89148 00:26:45.430 00:10:51 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 89148 ']' 00:26:45.430 00:10:51 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.430 00:10:51 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:45.430 00:10:51 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:45.430 00:10:51 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.430 00:10:51 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:45.430 00:10:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:45.430 [2024-11-19 00:10:52.007086] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:26:45.430 [2024-11-19 00:10:52.007251] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.689 [2024-11-19 00:10:52.196548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.689 [2024-11-19 00:10:52.320626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
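The network plumbing that the four pings just validated is easier to see in one place. A condensed recreation of the topology built by the commands above (one initiator/target veth pair shown; the test builds two of each, adding nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4):

# veth pair from host to the target namespace, joined by a bridge (condensed sketch)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                   # host -> namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # namespace -> host

One aside on the "[: : integer expression expected" complaint from nvmf/common.sh line 33 a little earlier: the trace shows '[' '' -eq 1 ']', a numeric test against an unset variable. The usual hardening is a default expansion, e.g. [ "${FLAG:-0}" -eq 1 ] (FLAG is a stand-in; the actual variable at that line is not visible in this log).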
00:26:45.689 [2024-11-19 00:10:52.320707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.689 [2024-11-19 00:10:52.320732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.689 [2024-11-19 00:10:52.320763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.689 [2024-11-19 00:10:52.320780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:45.689 [2024-11-19 00:10:52.322200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.948 [2024-11-19 00:10:52.504499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:46.517 00:10:52 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:46.517 00:10:52 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:26:46.517 00:10:52 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:46.517 00:10:52 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:46.517 00:10:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:46.517 00:10:53 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:46.517 00:10:53 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:26:46.517 00:10:53 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:46.517 00:10:53 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.517 00:10:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:46.517 [2024-11-19 00:10:53.022324] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.517 00:10:53 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.517 00:10:53 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:46.517 00:10:53 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:46.517 00:10:53 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:46.517 00:10:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:46.517 ************************************ 00:26:46.517 START TEST fio_dif_1_default 00:26:46.517 ************************************ 00:26:46.517 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:26:46.517 00:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:26:46.517 00:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:26:46.517 00:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:26:46.517 00:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:26:46.517 00:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:26:46.517 00:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:46.517 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:46.518 bdev_null0 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:46.518 
00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:46.518 [2024-11-19 00:10:53.066540] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:46.518 { 00:26:46.518 "params": { 00:26:46.518 "name": "Nvme$subsystem", 00:26:46.518 "trtype": "$TEST_TRANSPORT", 00:26:46.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.518 "adrfam": "ipv4", 00:26:46.518 "trsvcid": "$NVMF_PORT", 00:26:46.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.518 "hdgst": ${hdgst:-false}, 00:26:46.518 "ddgst": ${ddgst:-false} 00:26:46.518 }, 00:26:46.518 "method": "bdev_nvme_attach_controller" 00:26:46.518 } 00:26:46.518 EOF 00:26:46.518 )") 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1347 -- # local asan_lib= 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:46.518 "params": { 00:26:46.518 "name": "Nvme0", 00:26:46.518 "trtype": "tcp", 00:26:46.518 "traddr": "10.0.0.3", 00:26:46.518 "adrfam": "ipv4", 00:26:46.518 "trsvcid": "4420", 00:26:46.518 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:46.518 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:46.518 "hdgst": false, 00:26:46.518 "ddgst": false 00:26:46.518 }, 00:26:46.518 "method": "bdev_nvme_attach_controller" 00:26:46.518 }' 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:46.518 00:10:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:46.777 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:46.777 fio-3.35 00:26:46.777 Starting 1 thread 00:26:59.022 00:26:59.022 filename0: (groupid=0, jobs=1): err= 0: pid=89211: Tue Nov 19 00:11:04 2024 00:26:59.022 read: IOPS=7818, BW=30.5MiB/s (32.0MB/s)(305MiB/10001msec) 00:26:59.022 slat (nsec): min=7102, max=70630, avg=10062.20, stdev=4511.38 00:26:59.022 clat (usec): min=405, max=1756, avg=480.92, stdev=46.08 00:26:59.022 lat (usec): min=412, max=1770, avg=490.98, stdev=47.32 00:26:59.022 clat percentiles (usec): 00:26:59.022 | 1.00th=[ 412], 5.00th=[ 420], 10.00th=[ 433], 20.00th=[ 445], 00:26:59.022 | 30.00th=[ 457], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 482], 00:26:59.022 | 70.00th=[ 494], 80.00th=[ 510], 90.00th=[ 537], 95.00th=[ 570], 00:26:59.022 | 99.00th=[ 627], 99.50th=[ 652], 99.90th=[ 709], 99.95th=[ 734], 00:26:59.022 | 99.99th=[ 1106] 00:26:59.022 bw ( KiB/s): min=30048, max=32160, per=100.00%, avg=31277.58, stdev=622.74, samples=19 00:26:59.022 iops : min= 7512, max= 8040, avg=7819.37, stdev=155.70, samples=19 00:26:59.022 lat (usec) : 
500=74.05%, 750=25.91%, 1000=0.03% 00:26:59.022 lat (msec) : 2=0.01% 00:26:59.022 cpu : usr=84.18%, sys=13.98%, ctx=17, majf=0, minf=1060 00:26:59.022 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:59.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:59.022 issued rwts: total=78196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:59.022 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:59.022 00:26:59.022 Run status group 0 (all jobs): 00:26:59.022 READ: bw=30.5MiB/s (32.0MB/s), 30.5MiB/s-30.5MiB/s (32.0MB/s-32.0MB/s), io=305MiB (320MB), run=10001-10001msec 00:26:59.022 ----------------------------------------------------- 00:26:59.022 Suppressions used: 00:26:59.022 count bytes template 00:26:59.022 1 8 /usr/src/fio/parse.c 00:26:59.022 1 8 libtcmalloc_minimal.so 00:26:59.022 1 904 libcrypto.so 00:26:59.022 ----------------------------------------------------- 00:26:59.022 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.022 00:26:59.022 real 0m12.200s 00:26:59.022 user 0m10.216s 00:26:59.022 sys 0m1.739s 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:59.022 ************************************ 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:59.022 END TEST fio_dif_1_default 00:26:59.022 ************************************ 00:26:59.022 00:11:05 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:59.022 00:11:05 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:59.022 00:11:05 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:59.022 00:11:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:59.022 ************************************ 00:26:59.022 START TEST fio_dif_1_multi_subsystems 00:26:59.022 ************************************ 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:59.022 00:11:05 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:59.022 bdev_null0 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.022 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:59.023 [2024-11-19 00:11:05.319665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:59.023 bdev_null1 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 
--allow-any-host 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:59.023 { 00:26:59.023 "params": { 00:26:59.023 "name": "Nvme$subsystem", 00:26:59.023 "trtype": "$TEST_TRANSPORT", 00:26:59.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.023 "adrfam": "ipv4", 00:26:59.023 "trsvcid": "$NVMF_PORT", 00:26:59.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.023 "hdgst": ${hdgst:-false}, 00:26:59.023 "ddgst": ${ddgst:-false} 00:26:59.023 }, 00:26:59.023 "method": "bdev_nvme_attach_controller" 00:26:59.023 } 00:26:59.023 EOF 00:26:59.023 )") 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:59.023 00:11:05 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:59.023 { 00:26:59.023 "params": { 00:26:59.023 "name": "Nvme$subsystem", 00:26:59.023 "trtype": "$TEST_TRANSPORT", 00:26:59.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.023 "adrfam": "ipv4", 00:26:59.023 "trsvcid": "$NVMF_PORT", 00:26:59.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.023 "hdgst": ${hdgst:-false}, 00:26:59.023 "ddgst": ${ddgst:-false} 00:26:59.023 }, 00:26:59.023 "method": "bdev_nvme_attach_controller" 00:26:59.023 } 00:26:59.023 EOF 00:26:59.023 )") 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:59.023 "params": { 00:26:59.023 "name": "Nvme0", 00:26:59.023 "trtype": "tcp", 00:26:59.023 "traddr": "10.0.0.3", 00:26:59.023 "adrfam": "ipv4", 00:26:59.023 "trsvcid": "4420", 00:26:59.023 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:59.023 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:59.023 "hdgst": false, 00:26:59.023 "ddgst": false 00:26:59.023 }, 00:26:59.023 "method": "bdev_nvme_attach_controller" 00:26:59.023 },{ 00:26:59.023 "params": { 00:26:59.023 "name": "Nvme1", 00:26:59.023 "trtype": "tcp", 00:26:59.023 "traddr": "10.0.0.3", 00:26:59.023 "adrfam": "ipv4", 00:26:59.023 "trsvcid": "4420", 00:26:59.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:59.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:59.023 "hdgst": false, 00:26:59.023 "ddgst": false 00:26:59.023 }, 00:26:59.023 "method": "bdev_nvme_attach_controller" 00:26:59.023 }' 00:26:59.023 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:59.024 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:59.024 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:26:59.024 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:59.024 00:11:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:59.024 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:59.024 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:59.024 fio-3.35 00:26:59.024 Starting 2 threads 00:27:11.235 00:27:11.235 filename0: (groupid=0, jobs=1): err= 0: pid=89372: Tue Nov 19 00:11:16 2024 00:27:11.235 read: IOPS=4279, BW=16.7MiB/s (17.5MB/s)(167MiB/10001msec) 00:27:11.235 slat (nsec): min=5193, max=62295, avg=15223.96, stdev=4701.20 00:27:11.235 clat (usec): min=510, max=1390, avg=892.85, stdev=61.60 00:27:11.235 lat (usec): min=518, max=1409, avg=908.07, stdev=62.57 00:27:11.235 clat percentiles (usec): 00:27:11.235 | 1.00th=[ 766], 5.00th=[ 807], 10.00th=[ 824], 20.00th=[ 848], 00:27:11.235 | 30.00th=[ 865], 40.00th=[ 873], 50.00th=[ 889], 60.00th=[ 898], 00:27:11.235 | 70.00th=[ 914], 80.00th=[ 938], 90.00th=[ 971], 95.00th=[ 1004], 00:27:11.235 | 99.00th=[ 1090], 99.50th=[ 1123], 99.90th=[ 1172], 99.95th=[ 1205], 00:27:11.235 | 99.99th=[ 1254] 00:27:11.235 bw ( KiB/s): min=16544, max=17472, per=50.02%, avg=17121.68, stdev=242.41, samples=19 00:27:11.235 iops : min= 4136, max= 4368, avg=4280.42, stdev=60.60, samples=19 00:27:11.235 lat (usec) : 750=0.42%, 1000=94.13% 00:27:11.235 lat (msec) : 2=5.45% 00:27:11.235 cpu : usr=90.72%, sys=7.92%, ctx=20, majf=0, minf=1075 00:27:11.235 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:11.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.235 issued rwts: total=42796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.235 latency : 
target=0, window=0, percentile=100.00%, depth=4 00:27:11.235 filename1: (groupid=0, jobs=1): err= 0: pid=89373: Tue Nov 19 00:11:16 2024 00:27:11.235 read: IOPS=4277, BW=16.7MiB/s (17.5MB/s)(167MiB/10001msec) 00:27:11.235 slat (usec): min=7, max=382, avg=15.62, stdev= 6.33 00:27:11.235 clat (usec): min=505, max=1718, avg=890.78, stdev=59.16 00:27:11.235 lat (usec): min=513, max=1755, avg=906.40, stdev=60.14 00:27:11.235 clat percentiles (usec): 00:27:11.235 | 1.00th=[ 799], 5.00th=[ 816], 10.00th=[ 832], 20.00th=[ 848], 00:27:11.235 | 30.00th=[ 857], 40.00th=[ 873], 50.00th=[ 881], 60.00th=[ 898], 00:27:11.235 | 70.00th=[ 906], 80.00th=[ 930], 90.00th=[ 963], 95.00th=[ 996], 00:27:11.235 | 99.00th=[ 1090], 99.50th=[ 1123], 99.90th=[ 1221], 99.95th=[ 1401], 00:27:11.235 | 99.99th=[ 1532] 00:27:11.235 bw ( KiB/s): min=16512, max=17472, per=50.00%, avg=17113.26, stdev=251.42, samples=19 00:27:11.235 iops : min= 4128, max= 4368, avg=4278.32, stdev=62.86, samples=19 00:27:11.235 lat (usec) : 750=0.09%, 1000=95.15% 00:27:11.235 lat (msec) : 2=4.76% 00:27:11.235 cpu : usr=89.79%, sys=8.47%, ctx=86, majf=0, minf=1074 00:27:11.235 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:11.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.235 issued rwts: total=42780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.235 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:11.235 00:27:11.235 Run status group 0 (all jobs): 00:27:11.235 READ: bw=33.4MiB/s (35.0MB/s), 16.7MiB/s-16.7MiB/s (17.5MB/s-17.5MB/s), io=334MiB (351MB), run=10001-10001msec 00:27:11.235 ----------------------------------------------------- 00:27:11.235 Suppressions used: 00:27:11.235 count bytes template 00:27:11.235 2 16 /usr/src/fio/parse.c 00:27:11.235 1 8 libtcmalloc_minimal.so 00:27:11.235 1 904 libcrypto.so 00:27:11.235 ----------------------------------------------------- 00:27:11.235 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:11.235 
00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.235 00:27:11.235 real 0m12.440s 00:27:11.235 user 0m20.035s 00:27:11.235 sys 0m2.038s 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:11.235 ************************************ 00:27:11.235 END TEST fio_dif_1_multi_subsystems 00:27:11.235 00:11:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:11.235 ************************************ 00:27:11.235 00:11:17 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:11.235 00:11:17 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:11.235 00:11:17 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:11.235 00:11:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:11.235 ************************************ 00:27:11.235 START TEST fio_dif_rand_params 00:27:11.235 ************************************ 00:27:11.235 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:27:11.235 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:11.235 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:11.235 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:11.235 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.236 00:11:17 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:11.236 bdev_null0 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:11.236 [2024-11-19 00:11:17.813923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:11.236 { 00:27:11.236 "params": { 00:27:11.236 "name": "Nvme$subsystem", 00:27:11.236 "trtype": "$TEST_TRANSPORT", 00:27:11.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.236 "adrfam": "ipv4", 00:27:11.236 "trsvcid": "$NVMF_PORT", 00:27:11.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.236 "hdgst": ${hdgst:-false}, 00:27:11.236 "ddgst": ${ddgst:-false} 00:27:11.236 }, 00:27:11.236 "method": "bdev_nvme_attach_controller" 00:27:11.236 } 00:27:11.236 EOF 00:27:11.236 )") 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:11.236 "params": { 00:27:11.236 "name": "Nvme0", 00:27:11.236 "trtype": "tcp", 00:27:11.236 "traddr": "10.0.0.3", 00:27:11.236 "adrfam": "ipv4", 00:27:11.236 "trsvcid": "4420", 00:27:11.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:11.236 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:11.236 "hdgst": false, 00:27:11.236 "ddgst": false 00:27:11.236 }, 00:27:11.236 "method": "bdev_nvme_attach_controller" 00:27:11.236 }' 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:11.236 00:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:11.495 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:11.495 ... 
00:27:11.495 fio-3.35 00:27:11.495 Starting 3 threads 00:27:18.055 00:27:18.055 filename0: (groupid=0, jobs=1): err= 0: pid=89533: Tue Nov 19 00:11:23 2024 00:27:18.055 read: IOPS=237, BW=29.6MiB/s (31.1MB/s)(149MiB/5011msec) 00:27:18.055 slat (nsec): min=5943, max=69691, avg=17985.53, stdev=6163.96 00:27:18.055 clat (usec): min=12056, max=17607, avg=12612.48, stdev=552.53 00:27:18.055 lat (usec): min=12069, max=17637, avg=12630.47, stdev=552.90 00:27:18.055 clat percentiles (usec): 00:27:18.055 | 1.00th=[12125], 5.00th=[12125], 10.00th=[12256], 20.00th=[12256], 00:27:18.055 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:27:18.055 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13304], 95.00th=[13566], 00:27:18.055 | 99.00th=[14484], 99.50th=[14746], 99.90th=[17695], 99.95th=[17695], 00:27:18.055 | 99.99th=[17695] 00:27:18.055 bw ( KiB/s): min=29952, max=31488, per=33.32%, avg=30336.00, stdev=543.06, samples=10 00:27:18.055 iops : min= 234, max= 246, avg=237.00, stdev= 4.24, samples=10 00:27:18.055 lat (msec) : 20=100.00% 00:27:18.055 cpu : usr=92.53%, sys=6.91%, ctx=46, majf=0, minf=1075 00:27:18.055 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:18.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.055 issued rwts: total=1188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.055 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:18.055 filename0: (groupid=0, jobs=1): err= 0: pid=89534: Tue Nov 19 00:11:23 2024 00:27:18.055 read: IOPS=237, BW=29.6MiB/s (31.1MB/s)(149MiB/5010msec) 00:27:18.056 slat (usec): min=5, max=158, avg=18.70, stdev= 7.73 00:27:18.056 clat (usec): min=12040, max=16447, avg=12606.05, stdev=526.11 00:27:18.056 lat (usec): min=12054, max=16470, avg=12624.75, stdev=526.71 00:27:18.056 clat percentiles (usec): 00:27:18.056 | 1.00th=[12125], 5.00th=[12125], 10.00th=[12256], 20.00th=[12256], 00:27:18.056 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:27:18.056 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13304], 95.00th=[13566], 00:27:18.056 | 99.00th=[14484], 99.50th=[14746], 99.90th=[16450], 99.95th=[16450], 00:27:18.056 | 99.99th=[16450] 00:27:18.056 bw ( KiB/s): min=29952, max=31488, per=33.32%, avg=30336.00, stdev=543.06, samples=10 00:27:18.056 iops : min= 234, max= 246, avg=237.00, stdev= 4.24, samples=10 00:27:18.056 lat (msec) : 20=100.00% 00:27:18.056 cpu : usr=91.97%, sys=7.03%, ctx=80, majf=0, minf=1074 00:27:18.056 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:18.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.056 issued rwts: total=1188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.056 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:18.056 filename0: (groupid=0, jobs=1): err= 0: pid=89535: Tue Nov 19 00:11:23 2024 00:27:18.056 read: IOPS=237, BW=29.7MiB/s (31.1MB/s)(149MiB/5008msec) 00:27:18.056 slat (nsec): min=5388, max=75803, avg=18882.28, stdev=6934.89 00:27:18.056 clat (usec): min=12049, max=14863, avg=12601.56, stdev=501.15 00:27:18.056 lat (usec): min=12063, max=14885, avg=12620.44, stdev=502.17 00:27:18.056 clat percentiles (usec): 00:27:18.056 | 1.00th=[12125], 5.00th=[12125], 10.00th=[12256], 20.00th=[12256], 00:27:18.056 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 
60.00th=[12518], 00:27:18.056 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13304], 95.00th=[13566], 00:27:18.056 | 99.00th=[14484], 99.50th=[14615], 99.90th=[14877], 99.95th=[14877], 00:27:18.056 | 99.99th=[14877] 00:27:18.056 bw ( KiB/s): min=29952, max=31488, per=33.33%, avg=30342.00, stdev=538.66, samples=10 00:27:18.056 iops : min= 234, max= 246, avg=237.00, stdev= 4.24, samples=10 00:27:18.056 lat (msec) : 20=100.00% 00:27:18.056 cpu : usr=92.51%, sys=6.89%, ctx=42, majf=0, minf=1073 00:27:18.056 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:18.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.056 issued rwts: total=1188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.056 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:18.056 00:27:18.056 Run status group 0 (all jobs): 00:27:18.056 READ: bw=88.9MiB/s (93.2MB/s), 29.6MiB/s-29.7MiB/s (31.1MB/s-31.1MB/s), io=446MiB (467MB), run=5008-5011msec 00:27:18.314 ----------------------------------------------------- 00:27:18.314 Suppressions used: 00:27:18.314 count bytes template 00:27:18.314 5 44 /usr/src/fio/parse.c 00:27:18.314 1 8 libtcmalloc_minimal.so 00:27:18.314 1 904 libcrypto.so 00:27:18.314 ----------------------------------------------------- 00:27:18.314 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:18.314 00:11:24 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:18.314 bdev_null0 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.314 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:18.315 00:11:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.315 00:11:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:18.315 00:11:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.315 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:18.315 00:11:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.315 00:11:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:18.315 00:11:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.315 00:11:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:18.315 00:11:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.315 00:11:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:18.315 [2024-11-19 00:11:25.002053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:18.574 bdev_null1 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:18.574 00:11:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:18.574 bdev_null2 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.574 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:27:18.575 { 00:27:18.575 "params": { 00:27:18.575 "name": "Nvme$subsystem", 00:27:18.575 "trtype": "$TEST_TRANSPORT", 00:27:18.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.575 "adrfam": "ipv4", 00:27:18.575 "trsvcid": "$NVMF_PORT", 00:27:18.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.575 "hdgst": ${hdgst:-false}, 00:27:18.575 "ddgst": ${ddgst:-false} 00:27:18.575 }, 00:27:18.575 "method": "bdev_nvme_attach_controller" 00:27:18.575 } 00:27:18.575 EOF 00:27:18.575 )") 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:18.575 { 00:27:18.575 "params": { 00:27:18.575 "name": "Nvme$subsystem", 00:27:18.575 "trtype": "$TEST_TRANSPORT", 00:27:18.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.575 "adrfam": "ipv4", 00:27:18.575 "trsvcid": "$NVMF_PORT", 00:27:18.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.575 "hdgst": ${hdgst:-false}, 00:27:18.575 "ddgst": ${ddgst:-false} 00:27:18.575 }, 00:27:18.575 "method": "bdev_nvme_attach_controller" 00:27:18.575 } 00:27:18.575 EOF 00:27:18.575 )") 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:18.575 { 00:27:18.575 "params": { 00:27:18.575 "name": "Nvme$subsystem", 00:27:18.575 "trtype": "$TEST_TRANSPORT", 00:27:18.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.575 "adrfam": "ipv4", 00:27:18.575 "trsvcid": "$NVMF_PORT", 00:27:18.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.575 "hdgst": ${hdgst:-false}, 00:27:18.575 "ddgst": ${ddgst:-false} 00:27:18.575 }, 00:27:18.575 "method": "bdev_nvme_attach_controller" 00:27:18.575 } 00:27:18.575 EOF 00:27:18.575 )") 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:18.575 "params": { 00:27:18.575 "name": "Nvme0", 00:27:18.575 "trtype": "tcp", 00:27:18.575 "traddr": "10.0.0.3", 00:27:18.575 "adrfam": "ipv4", 00:27:18.575 "trsvcid": "4420", 00:27:18.575 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:18.575 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:18.575 "hdgst": false, 00:27:18.575 "ddgst": false 00:27:18.575 }, 00:27:18.575 "method": "bdev_nvme_attach_controller" 00:27:18.575 },{ 00:27:18.575 "params": { 00:27:18.575 "name": "Nvme1", 00:27:18.575 "trtype": "tcp", 00:27:18.575 "traddr": "10.0.0.3", 00:27:18.575 "adrfam": "ipv4", 00:27:18.575 "trsvcid": "4420", 00:27:18.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:18.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:18.575 "hdgst": false, 00:27:18.575 "ddgst": false 00:27:18.575 }, 00:27:18.575 "method": "bdev_nvme_attach_controller" 00:27:18.575 },{ 00:27:18.575 "params": { 00:27:18.575 "name": "Nvme2", 00:27:18.575 "trtype": "tcp", 00:27:18.575 "traddr": "10.0.0.3", 00:27:18.575 "adrfam": "ipv4", 00:27:18.575 "trsvcid": "4420", 00:27:18.575 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:18.575 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:18.575 "hdgst": false, 00:27:18.575 "ddgst": false 00:27:18.575 }, 00:27:18.575 "method": "bdev_nvme_attach_controller" 00:27:18.575 }' 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:18.575 00:11:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:27:18.834 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:18.834 ... 00:27:18.834 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:18.834 ... 00:27:18.834 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:18.834 ... 00:27:18.834 fio-3.35 00:27:18.834 Starting 24 threads 00:27:31.035 00:27:31.035 filename0: (groupid=0, jobs=1): err= 0: pid=89633: Tue Nov 19 00:11:36 2024 00:27:31.035 read: IOPS=159, BW=636KiB/s (652kB/s)(6380KiB/10024msec) 00:27:31.035 slat (usec): min=5, max=8039, avg=23.08, stdev=200.93 00:27:31.035 clat (msec): min=26, max=221, avg=100.40, stdev=30.04 00:27:31.035 lat (msec): min=26, max=221, avg=100.42, stdev=30.04 00:27:31.035 clat percentiles (msec): 00:27:31.035 | 1.00th=[ 36], 5.00th=[ 52], 10.00th=[ 69], 20.00th=[ 83], 00:27:31.035 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 96], 60.00th=[ 96], 00:27:31.035 | 70.00th=[ 111], 80.00th=[ 134], 90.00th=[ 144], 95.00th=[ 144], 00:27:31.035 | 99.00th=[ 199], 99.50th=[ 199], 99.90th=[ 222], 99.95th=[ 222], 00:27:31.035 | 99.99th=[ 222] 00:27:31.035 bw ( KiB/s): min= 512, max= 944, per=4.32%, avg=629.89, stdev=124.25, samples=19 00:27:31.035 iops : min= 128, max= 236, avg=157.47, stdev=31.06, samples=19 00:27:31.035 lat (msec) : 50=4.83%, 100=59.12%, 250=36.05% 00:27:31.035 cpu : usr=31.64%, sys=1.68%, ctx=845, majf=0, minf=1073 00:27:31.035 IO depths : 1=0.1%, 2=1.4%, 4=5.5%, 8=78.2%, 16=14.9%, 32=0.0%, >=64=0.0% 00:27:31.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.035 complete : 0=0.0%, 4=88.2%, 8=10.6%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.035 issued rwts: total=1595,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.035 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.035 filename0: (groupid=0, jobs=1): err= 0: pid=89634: Tue Nov 19 00:11:36 2024 00:27:31.035 read: IOPS=162, BW=650KiB/s (666kB/s)(6560KiB/10087msec) 00:27:31.035 slat (usec): min=5, max=4031, avg=18.72, stdev=99.42 00:27:31.035 clat (msec): min=14, max=183, avg=98.19, stdev=32.80 00:27:31.035 lat (msec): min=14, max=183, avg=98.21, stdev=32.80 00:27:31.035 clat percentiles (msec): 00:27:31.035 | 1.00th=[ 33], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 71], 00:27:31.035 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 94], 60.00th=[ 96], 00:27:31.035 | 70.00th=[ 123], 80.00th=[ 136], 90.00th=[ 144], 95.00th=[ 144], 00:27:31.035 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 184], 00:27:31.035 | 99.99th=[ 184] 00:27:31.035 bw ( KiB/s): min= 456, max= 896, per=4.46%, avg=649.65, stdev=162.13, samples=20 00:27:31.035 iops : min= 114, max= 224, avg=162.40, stdev=40.52, samples=20 00:27:31.035 lat (msec) : 20=0.85%, 50=5.61%, 100=55.67%, 250=37.87% 00:27:31.035 cpu : usr=34.27%, sys=2.04%, ctx=1061, majf=0, minf=1074 00:27:31.035 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.5%, 16=16.0%, 32=0.0%, >=64=0.0% 00:27:31.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.035 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.035 issued rwts: total=1640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.035 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.035 filename0: (groupid=0, jobs=1): err= 0: pid=89635: Tue Nov 19 00:11:36 2024 00:27:31.035 read: IOPS=156, BW=624KiB/s (639kB/s)(6264KiB/10036msec) 
00:27:31.035 slat (usec): min=5, max=8041, avg=31.01, stdev=303.94 00:27:31.035 clat (msec): min=39, max=218, avg=102.26, stdev=28.69 00:27:31.035 lat (msec): min=39, max=218, avg=102.29, stdev=28.69 00:27:31.035 clat percentiles (msec): 00:27:31.035 | 1.00th=[ 48], 5.00th=[ 63], 10.00th=[ 74], 20.00th=[ 82], 00:27:31.035 | 30.00th=[ 85], 40.00th=[ 88], 50.00th=[ 95], 60.00th=[ 99], 00:27:31.035 | 70.00th=[ 121], 80.00th=[ 136], 90.00th=[ 144], 95.00th=[ 148], 00:27:31.035 | 99.00th=[ 201], 99.50th=[ 201], 99.90th=[ 220], 99.95th=[ 220], 00:27:31.035 | 99.99th=[ 220] 00:27:31.035 bw ( KiB/s): min= 512, max= 816, per=4.28%, avg=623.21, stdev=106.00, samples=19 00:27:31.035 iops : min= 128, max= 204, avg=155.79, stdev=26.52, samples=19 00:27:31.035 lat (msec) : 50=1.28%, 100=60.22%, 250=38.51% 00:27:31.035 cpu : usr=42.09%, sys=2.60%, ctx=1300, majf=0, minf=1072 00:27:31.035 IO depths : 1=0.1%, 2=1.9%, 4=7.6%, 8=75.7%, 16=14.7%, 32=0.0%, >=64=0.0% 00:27:31.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.035 complete : 0=0.0%, 4=88.9%, 8=9.5%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.035 issued rwts: total=1566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.035 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.035 filename0: (groupid=0, jobs=1): err= 0: pid=89636: Tue Nov 19 00:11:36 2024 00:27:31.035 read: IOPS=142, BW=571KiB/s (585kB/s)(5716KiB/10009msec) 00:27:31.035 slat (usec): min=5, max=8039, avg=39.07, stdev=423.67 00:27:31.035 clat (msec): min=14, max=203, avg=111.84, stdev=31.64 00:27:31.035 lat (msec): min=14, max=203, avg=111.88, stdev=31.65 00:27:31.035 clat percentiles (msec): 00:27:31.035 | 1.00th=[ 29], 5.00th=[ 75], 10.00th=[ 84], 20.00th=[ 85], 00:27:31.035 | 30.00th=[ 92], 40.00th=[ 96], 50.00th=[ 97], 60.00th=[ 120], 00:27:31.035 | 70.00th=[ 136], 80.00th=[ 144], 90.00th=[ 144], 95.00th=[ 178], 00:27:31.035 | 99.00th=[ 188], 99.50th=[ 190], 99.90th=[ 205], 99.95th=[ 205], 00:27:31.035 | 99.99th=[ 205] 00:27:31.035 bw ( KiB/s): min= 384, max= 768, per=3.84%, avg=559.84, stdev=109.47, samples=19 00:27:31.035 iops : min= 96, max= 192, avg=139.95, stdev=27.37, samples=19 00:27:31.035 lat (msec) : 20=0.42%, 50=1.19%, 100=49.90%, 250=48.50% 00:27:31.035 cpu : usr=31.38%, sys=1.89%, ctx=870, majf=0, minf=1074 00:27:31.036 IO depths : 1=0.1%, 2=4.2%, 4=16.7%, 8=65.5%, 16=13.5%, 32=0.0%, >=64=0.0% 00:27:31.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.036 complete : 0=0.0%, 4=91.7%, 8=4.6%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.036 issued rwts: total=1429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.036 filename0: (groupid=0, jobs=1): err= 0: pid=89637: Tue Nov 19 00:11:36 2024 00:27:31.036 read: IOPS=137, BW=550KiB/s (563kB/s)(5504KiB/10008msec) 00:27:31.036 slat (usec): min=4, max=4587, avg=19.32, stdev=123.41 00:27:31.036 clat (msec): min=11, max=228, avg=116.18, stdev=31.36 00:27:31.036 lat (msec): min=11, max=228, avg=116.20, stdev=31.36 00:27:31.036 clat percentiles (msec): 00:27:31.036 | 1.00th=[ 16], 5.00th=[ 79], 10.00th=[ 80], 20.00th=[ 86], 00:27:31.036 | 30.00th=[ 90], 40.00th=[ 105], 50.00th=[ 126], 60.00th=[ 136], 00:27:31.036 | 70.00th=[ 140], 80.00th=[ 142], 90.00th=[ 146], 95.00th=[ 153], 00:27:31.036 | 99.00th=[ 197], 99.50th=[ 197], 99.90th=[ 228], 99.95th=[ 228], 00:27:31.036 | 99.99th=[ 228] 00:27:31.036 bw ( KiB/s): min= 384, max= 768, per=3.70%, avg=538.95, stdev=128.23, samples=19 
00:27:31.036 iops : min= 96, max= 192, avg=134.74, stdev=32.06, samples=19 00:27:31.036 lat (msec) : 20=1.16%, 50=0.15%, 100=36.19%, 250=62.50% 00:27:31.036 cpu : usr=42.67%, sys=2.67%, ctx=1609, majf=0, minf=1072 00:27:31.036 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:27:31.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.036 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.036 issued rwts: total=1376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.036 filename0: (groupid=0, jobs=1): err= 0: pid=89638: Tue Nov 19 00:11:36 2024 00:27:31.036 read: IOPS=162, BW=649KiB/s (664kB/s)(6540KiB/10083msec) 00:27:31.036 slat (usec): min=5, max=6038, avg=31.00, stdev=244.46 00:27:31.036 clat (msec): min=26, max=183, avg=98.30, stdev=30.20 00:27:31.036 lat (msec): min=26, max=183, avg=98.33, stdev=30.21 00:27:31.036 clat percentiles (msec): 00:27:31.036 | 1.00th=[ 33], 5.00th=[ 51], 10.00th=[ 62], 20.00th=[ 74], 00:27:31.036 | 30.00th=[ 84], 40.00th=[ 87], 50.00th=[ 94], 60.00th=[ 99], 00:27:31.036 | 70.00th=[ 110], 80.00th=[ 136], 90.00th=[ 144], 95.00th=[ 146], 00:27:31.036 | 99.00th=[ 150], 99.50th=[ 153], 99.90th=[ 176], 99.95th=[ 184], 00:27:31.036 | 99.99th=[ 184] 00:27:31.036 bw ( KiB/s): min= 456, max= 896, per=4.45%, avg=647.60, stdev=143.80, samples=20 00:27:31.036 iops : min= 114, max= 224, avg=161.90, stdev=35.95, samples=20 00:27:31.036 lat (msec) : 50=5.14%, 100=57.43%, 250=37.43% 00:27:31.036 cpu : usr=41.93%, sys=2.24%, ctx=1427, majf=0, minf=1074 00:27:31.036 IO depths : 1=0.1%, 2=0.6%, 4=2.1%, 8=81.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:27:31.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.036 complete : 0=0.0%, 4=87.5%, 8=12.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.036 issued rwts: total=1635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.036 filename0: (groupid=0, jobs=1): err= 0: pid=89639: Tue Nov 19 00:11:36 2024 00:27:31.036 read: IOPS=154, BW=616KiB/s (631kB/s)(6188KiB/10038msec) 00:27:31.036 slat (usec): min=4, max=8039, avg=34.00, stdev=352.91 00:27:31.036 clat (msec): min=36, max=206, avg=103.56, stdev=26.95 00:27:31.036 lat (msec): min=36, max=206, avg=103.60, stdev=26.94 00:27:31.036 clat percentiles (msec): 00:27:31.036 | 1.00th=[ 49], 5.00th=[ 63], 10.00th=[ 74], 20.00th=[ 84], 00:27:31.036 | 30.00th=[ 85], 40.00th=[ 93], 50.00th=[ 96], 60.00th=[ 100], 00:27:31.036 | 70.00th=[ 121], 80.00th=[ 133], 90.00th=[ 144], 95.00th=[ 144], 00:27:31.036 | 99.00th=[ 159], 99.50th=[ 159], 99.90th=[ 207], 99.95th=[ 207], 00:27:31.036 | 99.99th=[ 207] 00:27:31.036 bw ( KiB/s): min= 512, max= 816, per=4.23%, avg=615.16, stdev=96.18, samples=19 00:27:31.036 iops : min= 128, max= 204, avg=153.79, stdev=24.05, samples=19 00:27:31.036 lat (msec) : 50=1.03%, 100=59.15%, 250=39.82% 00:27:31.036 cpu : usr=31.75%, sys=1.51%, ctx=865, majf=0, minf=1075 00:27:31.036 IO depths : 1=0.1%, 2=2.1%, 4=8.5%, 8=74.8%, 16=14.5%, 32=0.0%, >=64=0.0% 00:27:31.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.036 complete : 0=0.0%, 4=89.2%, 8=9.0%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.036 issued rwts: total=1547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.036 filename0: (groupid=0, jobs=1): err= 0: pid=89640: Tue 
Nov 19 00:11:36 2024 00:27:31.036 read: IOPS=132, BW=531KiB/s (544kB/s)(5312KiB/10007msec) 00:27:31.036 slat (usec): min=4, max=8041, avg=56.17, stdev=548.88 00:27:31.036 clat (msec): min=25, max=237, avg=120.20, stdev=34.04 00:27:31.036 lat (msec): min=25, max=237, avg=120.26, stdev=34.03 00:27:31.036 clat percentiles (msec): 00:27:31.036 | 1.00th=[ 40], 5.00th=[ 75], 10.00th=[ 84], 20.00th=[ 85], 00:27:31.036 | 30.00th=[ 93], 40.00th=[ 107], 50.00th=[ 122], 60.00th=[ 142], 00:27:31.036 | 70.00th=[ 144], 80.00th=[ 144], 90.00th=[ 157], 95.00th=[ 180], 00:27:31.036 | 99.00th=[ 201], 99.50th=[ 201], 99.90th=[ 239], 99.95th=[ 239], 00:27:31.036 | 99.99th=[ 239] 00:27:31.036 bw ( KiB/s): min= 384, max= 768, per=3.61%, avg=525.53, stdev=135.58, samples=19 00:27:31.036 iops : min= 96, max= 192, avg=131.37, stdev=33.90, samples=19 00:27:31.036 lat (msec) : 50=1.20%, 100=38.40%, 250=60.39% 00:27:31.036 cpu : usr=31.49%, sys=1.83%, ctx=886, majf=0, minf=1073 00:27:31.036 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:27:31.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.036 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.036 issued rwts: total=1328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.036 filename1: (groupid=0, jobs=1): err= 0: pid=89641: Tue Nov 19 00:11:36 2024 00:27:31.036 read: IOPS=161, BW=646KiB/s (662kB/s)(6504KiB/10061msec) 00:27:31.036 slat (usec): min=5, max=8041, avg=35.46, stdev=322.57 00:27:31.036 clat (msec): min=31, max=167, avg=98.66, stdev=30.06 00:27:31.036 lat (msec): min=31, max=167, avg=98.70, stdev=30.05 00:27:31.036 clat percentiles (msec): 00:27:31.036 | 1.00th=[ 33], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 75], 00:27:31.036 | 30.00th=[ 84], 40.00th=[ 88], 50.00th=[ 95], 60.00th=[ 99], 00:27:31.036 | 70.00th=[ 116], 80.00th=[ 136], 90.00th=[ 144], 95.00th=[ 146], 00:27:31.036 | 99.00th=[ 150], 99.50th=[ 153], 99.90th=[ 157], 99.95th=[ 169], 00:27:31.036 | 99.99th=[ 169] 00:27:31.036 bw ( KiB/s): min= 512, max= 920, per=4.42%, avg=643.85, stdev=139.36, samples=20 00:27:31.036 iops : min= 128, max= 230, avg=160.95, stdev=34.84, samples=20 00:27:31.036 lat (msec) : 50=5.41%, 100=57.07%, 250=37.52% 00:27:31.036 cpu : usr=43.88%, sys=2.43%, ctx=1313, majf=0, minf=1073 00:27:31.036 IO depths : 1=0.1%, 2=0.7%, 4=2.4%, 8=81.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:27:31.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.036 complete : 0=0.0%, 4=87.4%, 8=12.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.036 issued rwts: total=1626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.036 filename1: (groupid=0, jobs=1): err= 0: pid=89642: Tue Nov 19 00:11:36 2024 00:27:31.036 read: IOPS=170, BW=680KiB/s (696kB/s)(6884KiB/10123msec) 00:27:31.036 slat (usec): min=5, max=8031, avg=23.72, stdev=216.10 00:27:31.036 clat (msec): min=2, max=178, avg=93.88, stdev=38.89 00:27:31.036 lat (msec): min=2, max=178, avg=93.91, stdev=38.89 00:27:31.036 clat percentiles (msec): 00:27:31.036 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 26], 20.00th=[ 74], 00:27:31.036 | 30.00th=[ 84], 40.00th=[ 89], 50.00th=[ 94], 60.00th=[ 99], 00:27:31.036 | 70.00th=[ 116], 80.00th=[ 136], 90.00th=[ 144], 95.00th=[ 144], 00:27:31.036 | 99.00th=[ 150], 99.50th=[ 153], 99.90th=[ 178], 99.95th=[ 178], 00:27:31.036 | 99.99th=[ 178] 00:27:31.036 bw ( 
KiB/s): min= 488, max= 2032, per=4.68%, avg=681.35, stdev=332.72, samples=20 00:27:31.036 iops : min= 122, max= 508, avg=170.30, stdev=83.17, samples=20 00:27:31.036 lat (msec) : 4=2.67%, 10=3.83%, 20=2.67%, 50=3.20%, 100=49.91% 00:27:31.036 lat (msec) : 250=37.71% 00:27:31.036 cpu : usr=41.46%, sys=2.45%, ctx=1514, majf=0, minf=1073 00:27:31.036 IO depths : 1=0.5%, 2=2.4%, 4=8.1%, 8=74.3%, 16=14.7%, 32=0.0%, >=64=0.0% 00:27:31.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.036 complete : 0=0.0%, 4=89.5%, 8=8.8%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.036 issued rwts: total=1721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.036 filename1: (groupid=0, jobs=1): err= 0: pid=89643: Tue Nov 19 00:11:36 2024 00:27:31.036 read: IOPS=154, BW=618KiB/s (633kB/s)(6200KiB/10036msec) 00:27:31.036 slat (usec): min=5, max=7039, avg=29.20, stdev=244.07 00:27:31.036 clat (msec): min=45, max=218, avg=103.39, stdev=28.19 00:27:31.036 lat (msec): min=45, max=218, avg=103.42, stdev=28.19 00:27:31.036 clat percentiles (msec): 00:27:31.036 | 1.00th=[ 48], 5.00th=[ 61], 10.00th=[ 74], 20.00th=[ 84], 00:27:31.036 | 30.00th=[ 86], 40.00th=[ 92], 50.00th=[ 96], 60.00th=[ 97], 00:27:31.036 | 70.00th=[ 121], 80.00th=[ 138], 90.00th=[ 144], 95.00th=[ 144], 00:27:31.036 | 99.00th=[ 205], 99.50th=[ 205], 99.90th=[ 218], 99.95th=[ 220], 00:27:31.036 | 99.99th=[ 220] 00:27:31.036 bw ( KiB/s): min= 512, max= 824, per=4.23%, avg=616.47, stdev=102.69, samples=19 00:27:31.036 iops : min= 128, max= 206, avg=154.11, stdev=25.69, samples=19 00:27:31.036 lat (msec) : 50=1.68%, 100=61.61%, 250=36.71% 00:27:31.036 cpu : usr=36.52%, sys=2.06%, ctx=1270, majf=0, minf=1073 00:27:31.036 IO depths : 1=0.1%, 2=1.7%, 4=6.9%, 8=76.5%, 16=14.8%, 32=0.0%, >=64=0.0% 00:27:31.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.036 complete : 0=0.0%, 4=88.8%, 8=9.7%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.036 issued rwts: total=1550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.036 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.036 filename1: (groupid=0, jobs=1): err= 0: pid=89644: Tue Nov 19 00:11:36 2024 00:27:31.036 read: IOPS=149, BW=598KiB/s (612kB/s)(6016KiB/10061msec) 00:27:31.036 slat (usec): min=5, max=8043, avg=33.30, stdev=358.12 00:27:31.036 clat (msec): min=29, max=191, avg=106.68, stdev=25.89 00:27:31.036 lat (msec): min=29, max=191, avg=106.71, stdev=25.89 00:27:31.036 clat percentiles (msec): 00:27:31.036 | 1.00th=[ 31], 5.00th=[ 74], 10.00th=[ 84], 20.00th=[ 85], 00:27:31.037 | 30.00th=[ 86], 40.00th=[ 95], 50.00th=[ 96], 60.00th=[ 108], 00:27:31.037 | 70.00th=[ 131], 80.00th=[ 140], 90.00th=[ 144], 95.00th=[ 144], 00:27:31.037 | 99.00th=[ 146], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 192], 00:27:31.037 | 99.99th=[ 192] 00:27:31.037 bw ( KiB/s): min= 488, max= 768, per=4.08%, avg=595.00, stdev=87.08, samples=20 00:27:31.037 iops : min= 122, max= 192, avg=148.75, stdev=21.77, samples=20 00:27:31.037 lat (msec) : 50=1.06%, 100=55.32%, 250=43.62% 00:27:31.037 cpu : usr=31.37%, sys=1.93%, ctx=859, majf=0, minf=1074 00:27:31.037 IO depths : 1=0.1%, 2=3.0%, 4=11.9%, 8=70.7%, 16=14.3%, 32=0.0%, >=64=0.0% 00:27:31.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.037 complete : 0=0.0%, 4=90.3%, 8=7.0%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.037 issued rwts: total=1504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.037 
latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.037 filename1: (groupid=0, jobs=1): err= 0: pid=89645: Tue Nov 19 00:11:36 2024 00:27:31.037 read: IOPS=135, BW=543KiB/s (556kB/s)(5432KiB/10009msec) 00:27:31.037 slat (usec): min=5, max=4032, avg=22.31, stdev=154.06 00:27:31.037 clat (msec): min=15, max=239, avg=117.73, stdev=33.21 00:27:31.037 lat (msec): min=15, max=239, avg=117.75, stdev=33.21 00:27:31.037 clat percentiles (msec): 00:27:31.037 | 1.00th=[ 16], 5.00th=[ 75], 10.00th=[ 82], 20.00th=[ 86], 00:27:31.037 | 30.00th=[ 92], 40.00th=[ 102], 50.00th=[ 126], 60.00th=[ 136], 00:27:31.037 | 70.00th=[ 142], 80.00th=[ 144], 90.00th=[ 148], 95.00th=[ 174], 00:27:31.037 | 99.00th=[ 203], 99.50th=[ 203], 99.90th=[ 239], 99.95th=[ 241], 00:27:31.037 | 99.99th=[ 241] 00:27:31.037 bw ( KiB/s): min= 384, max= 768, per=3.66%, avg=532.00, stdev=132.17, samples=19 00:27:31.037 iops : min= 96, max= 192, avg=133.00, stdev=33.04, samples=19 00:27:31.037 lat (msec) : 20=1.03%, 50=0.15%, 100=38.59%, 250=60.24% 00:27:31.037 cpu : usr=40.24%, sys=2.37%, ctx=1211, majf=0, minf=1074 00:27:31.037 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:27:31.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.037 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.037 issued rwts: total=1358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.037 filename1: (groupid=0, jobs=1): err= 0: pid=89646: Tue Nov 19 00:11:36 2024 00:27:31.037 read: IOPS=159, BW=638KiB/s (653kB/s)(6380KiB/10006msec) 00:27:31.037 slat (usec): min=4, max=11033, avg=28.91, stdev=303.78 00:27:31.037 clat (msec): min=6, max=227, avg=100.21, stdev=30.36 00:27:31.037 lat (msec): min=6, max=227, avg=100.24, stdev=30.37 00:27:31.037 clat percentiles (msec): 00:27:31.037 | 1.00th=[ 16], 5.00th=[ 56], 10.00th=[ 69], 20.00th=[ 82], 00:27:31.037 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 94], 60.00th=[ 97], 00:27:31.037 | 70.00th=[ 114], 80.00th=[ 136], 90.00th=[ 144], 95.00th=[ 144], 00:27:31.037 | 99.00th=[ 197], 99.50th=[ 197], 99.90th=[ 228], 99.95th=[ 228], 00:27:31.037 | 99.99th=[ 228] 00:27:31.037 bw ( KiB/s): min= 512, max= 824, per=4.29%, avg=624.42, stdev=109.45, samples=19 00:27:31.037 iops : min= 128, max= 206, avg=156.11, stdev=27.36, samples=19 00:27:31.037 lat (msec) : 10=0.56%, 20=0.63%, 50=2.63%, 100=60.38%, 250=35.80% 00:27:31.037 cpu : usr=43.44%, sys=2.52%, ctx=1470, majf=0, minf=1073 00:27:31.037 IO depths : 1=0.1%, 2=1.8%, 4=7.0%, 8=76.5%, 16=14.7%, 32=0.0%, >=64=0.0% 00:27:31.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.037 complete : 0=0.0%, 4=88.6%, 8=9.8%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.037 issued rwts: total=1595,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.037 filename1: (groupid=0, jobs=1): err= 0: pid=89647: Tue Nov 19 00:11:36 2024 00:27:31.037 read: IOPS=152, BW=609KiB/s (623kB/s)(6112KiB/10044msec) 00:27:31.037 slat (usec): min=5, max=8032, avg=22.44, stdev=205.13 00:27:31.037 clat (msec): min=36, max=181, avg=104.88, stdev=25.96 00:27:31.037 lat (msec): min=36, max=181, avg=104.90, stdev=25.96 00:27:31.037 clat percentiles (msec): 00:27:31.037 | 1.00th=[ 50], 5.00th=[ 72], 10.00th=[ 82], 20.00th=[ 84], 00:27:31.037 | 30.00th=[ 86], 40.00th=[ 95], 50.00th=[ 96], 60.00th=[ 107], 00:27:31.037 | 70.00th=[ 121], 80.00th=[ 134], 
90.00th=[ 144], 95.00th=[ 144], 00:27:31.037 | 99.00th=[ 159], 99.50th=[ 159], 99.90th=[ 182], 99.95th=[ 182], 00:27:31.037 | 99.99th=[ 182] 00:27:31.037 bw ( KiB/s): min= 512, max= 752, per=4.17%, avg=607.60, stdev=77.71, samples=20 00:27:31.037 iops : min= 128, max= 188, avg=151.90, stdev=19.43, samples=20 00:27:31.037 lat (msec) : 50=1.51%, 100=57.26%, 250=41.23% 00:27:31.037 cpu : usr=31.65%, sys=2.10%, ctx=900, majf=0, minf=1071 00:27:31.037 IO depths : 1=0.1%, 2=2.3%, 4=9.1%, 8=74.0%, 16=14.5%, 32=0.0%, >=64=0.0% 00:27:31.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.037 complete : 0=0.0%, 4=89.4%, 8=8.6%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.037 issued rwts: total=1528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.037 filename1: (groupid=0, jobs=1): err= 0: pid=89648: Tue Nov 19 00:11:36 2024 00:27:31.037 read: IOPS=141, BW=565KiB/s (579kB/s)(5664KiB/10023msec) 00:27:31.037 slat (usec): min=5, max=4033, avg=25.41, stdev=170.91 00:27:31.037 clat (msec): min=23, max=250, avg=113.00, stdev=33.20 00:27:31.037 lat (msec): min=23, max=250, avg=113.03, stdev=33.20 00:27:31.037 clat percentiles (msec): 00:27:31.037 | 1.00th=[ 45], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 85], 00:27:31.037 | 30.00th=[ 88], 40.00th=[ 94], 50.00th=[ 100], 60.00th=[ 124], 00:27:31.037 | 70.00th=[ 138], 80.00th=[ 144], 90.00th=[ 148], 95.00th=[ 169], 00:27:31.037 | 99.00th=[ 226], 99.50th=[ 226], 99.90th=[ 251], 99.95th=[ 251], 00:27:31.037 | 99.99th=[ 251] 00:27:31.037 bw ( KiB/s): min= 384, max= 768, per=3.81%, avg=554.53, stdev=113.45, samples=19 00:27:31.037 iops : min= 96, max= 192, avg=138.63, stdev=28.36, samples=19 00:27:31.037 lat (msec) : 50=1.13%, 100=49.15%, 250=49.65%, 500=0.07% 00:27:31.037 cpu : usr=41.54%, sys=2.49%, ctx=1338, majf=0, minf=1071 00:27:31.037 IO depths : 1=0.1%, 2=4.5%, 4=18.0%, 8=64.1%, 16=13.3%, 32=0.0%, >=64=0.0% 00:27:31.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.037 complete : 0=0.0%, 4=92.2%, 8=3.9%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.037 issued rwts: total=1416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.037 filename2: (groupid=0, jobs=1): err= 0: pid=89649: Tue Nov 19 00:11:36 2024 00:27:31.037 read: IOPS=158, BW=635KiB/s (650kB/s)(6368KiB/10031msec) 00:27:31.037 slat (usec): min=5, max=8034, avg=31.01, stdev=325.88 00:27:31.037 clat (msec): min=35, max=219, avg=100.64, stdev=30.11 00:27:31.037 lat (msec): min=35, max=219, avg=100.67, stdev=30.12 00:27:31.037 clat percentiles (msec): 00:27:31.037 | 1.00th=[ 45], 5.00th=[ 53], 10.00th=[ 64], 20.00th=[ 82], 00:27:31.037 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 96], 60.00th=[ 97], 00:27:31.037 | 70.00th=[ 117], 80.00th=[ 133], 90.00th=[ 144], 95.00th=[ 144], 00:27:31.037 | 99.00th=[ 197], 99.50th=[ 197], 99.90th=[ 220], 99.95th=[ 220], 00:27:31.037 | 99.99th=[ 220] 00:27:31.037 bw ( KiB/s): min= 496, max= 842, per=4.35%, avg=633.79, stdev=124.65, samples=19 00:27:31.037 iops : min= 124, max= 210, avg=158.42, stdev=31.12, samples=19 00:27:31.037 lat (msec) : 50=4.21%, 100=59.23%, 250=36.56% 00:27:31.037 cpu : usr=32.03%, sys=1.72%, ctx=895, majf=0, minf=1074 00:27:31.037 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:27:31.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.037 complete : 0=0.0%, 4=87.7%, 8=11.7%, 16=0.7%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.037 issued rwts: total=1592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.037 filename2: (groupid=0, jobs=1): err= 0: pid=89650: Tue Nov 19 00:11:36 2024 00:27:31.037 read: IOPS=133, BW=533KiB/s (546kB/s)(5368KiB/10062msec) 00:27:31.037 slat (usec): min=5, max=7046, avg=30.97, stdev=269.71 00:27:31.037 clat (msec): min=43, max=196, avg=119.61, stdev=32.53 00:27:31.037 lat (msec): min=43, max=196, avg=119.65, stdev=32.53 00:27:31.037 clat percentiles (msec): 00:27:31.037 | 1.00th=[ 44], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 86], 00:27:31.037 | 30.00th=[ 91], 40.00th=[ 108], 50.00th=[ 129], 60.00th=[ 136], 00:27:31.037 | 70.00th=[ 144], 80.00th=[ 144], 90.00th=[ 148], 95.00th=[ 180], 00:27:31.037 | 99.00th=[ 192], 99.50th=[ 192], 99.90th=[ 197], 99.95th=[ 197], 00:27:31.037 | 99.99th=[ 197] 00:27:31.037 bw ( KiB/s): min= 384, max= 766, per=3.65%, avg=531.10, stdev=135.04, samples=20 00:27:31.037 iops : min= 96, max= 191, avg=132.75, stdev=33.72, samples=20 00:27:31.037 lat (msec) : 50=1.04%, 100=35.62%, 250=63.34% 00:27:31.037 cpu : usr=41.25%, sys=2.32%, ctx=1189, majf=0, minf=1074 00:27:31.037 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:27:31.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.037 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.037 issued rwts: total=1342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.037 filename2: (groupid=0, jobs=1): err= 0: pid=89651: Tue Nov 19 00:11:36 2024 00:27:31.037 read: IOPS=154, BW=616KiB/s (631kB/s)(6176KiB/10024msec) 00:27:31.037 slat (usec): min=5, max=8037, avg=38.04, stdev=407.73 00:27:31.037 clat (msec): min=36, max=205, avg=103.57, stdev=26.72 00:27:31.037 lat (msec): min=36, max=205, avg=103.60, stdev=26.72 00:27:31.037 clat percentiles (msec): 00:27:31.037 | 1.00th=[ 48], 5.00th=[ 72], 10.00th=[ 75], 20.00th=[ 84], 00:27:31.037 | 30.00th=[ 85], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 97], 00:27:31.037 | 70.00th=[ 121], 80.00th=[ 134], 90.00th=[ 144], 95.00th=[ 144], 00:27:31.037 | 99.00th=[ 182], 99.50th=[ 182], 99.90th=[ 207], 99.95th=[ 207], 00:27:31.037 | 99.99th=[ 207] 00:27:31.037 bw ( KiB/s): min= 512, max= 792, per=4.19%, avg=610.68, stdev=84.60, samples=19 00:27:31.037 iops : min= 128, max= 198, avg=152.63, stdev=21.15, samples=19 00:27:31.037 lat (msec) : 50=1.23%, 100=61.46%, 250=37.31% 00:27:31.037 cpu : usr=31.45%, sys=1.80%, ctx=853, majf=0, minf=1071 00:27:31.037 IO depths : 1=0.1%, 2=2.4%, 4=9.5%, 8=73.6%, 16=14.4%, 32=0.0%, >=64=0.0% 00:27:31.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.037 complete : 0=0.0%, 4=89.5%, 8=8.4%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.038 issued rwts: total=1544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.038 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.038 filename2: (groupid=0, jobs=1): err= 0: pid=89652: Tue Nov 19 00:11:36 2024 00:27:31.038 read: IOPS=164, BW=660KiB/s (676kB/s)(6668KiB/10105msec) 00:27:31.038 slat (usec): min=4, max=8042, avg=21.90, stdev=196.65 00:27:31.038 clat (msec): min=9, max=179, avg=96.68, stdev=32.91 00:27:31.038 lat (msec): min=9, max=179, avg=96.71, stdev=32.91 00:27:31.038 clat percentiles (msec): 00:27:31.038 | 1.00th=[ 17], 5.00th=[ 46], 10.00th=[ 61], 20.00th=[ 72], 00:27:31.038 | 30.00th=[ 84], 40.00th=[ 85], 
50.00th=[ 95], 60.00th=[ 96], 00:27:31.038 | 70.00th=[ 121], 80.00th=[ 134], 90.00th=[ 144], 95.00th=[ 144], 00:27:31.038 | 99.00th=[ 153], 99.50th=[ 155], 99.90th=[ 180], 99.95th=[ 180], 00:27:31.038 | 99.99th=[ 180] 00:27:31.038 bw ( KiB/s): min= 488, max= 954, per=4.53%, avg=659.90, stdev=158.08, samples=20 00:27:31.038 iops : min= 122, max= 238, avg=164.95, stdev=39.47, samples=20 00:27:31.038 lat (msec) : 10=0.96%, 20=0.84%, 50=5.82%, 100=56.57%, 250=35.81% 00:27:31.038 cpu : usr=31.83%, sys=1.87%, ctx=888, majf=0, minf=1075 00:27:31.038 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.7%, 16=15.9%, 32=0.0%, >=64=0.0% 00:27:31.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.038 complete : 0=0.0%, 4=87.5%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.038 issued rwts: total=1667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.038 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.038 filename2: (groupid=0, jobs=1): err= 0: pid=89653: Tue Nov 19 00:11:36 2024 00:27:31.038 read: IOPS=149, BW=599KiB/s (613kB/s)(6012KiB/10037msec) 00:27:31.038 slat (nsec): min=4197, max=42154, avg=16340.12, stdev=6234.73 00:27:31.038 clat (msec): min=52, max=219, avg=106.64, stdev=27.22 00:27:31.038 lat (msec): min=52, max=219, avg=106.66, stdev=27.22 00:27:31.038 clat percentiles (msec): 00:27:31.038 | 1.00th=[ 72], 5.00th=[ 78], 10.00th=[ 81], 20.00th=[ 85], 00:27:31.038 | 30.00th=[ 88], 40.00th=[ 92], 50.00th=[ 96], 60.00th=[ 104], 00:27:31.038 | 70.00th=[ 128], 80.00th=[ 136], 90.00th=[ 144], 95.00th=[ 144], 00:27:31.038 | 99.00th=[ 203], 99.50th=[ 203], 99.90th=[ 220], 99.95th=[ 220], 00:27:31.038 | 99.99th=[ 220] 00:27:31.038 bw ( KiB/s): min= 488, max= 768, per=4.10%, avg=596.26, stdev=87.88, samples=19 00:27:31.038 iops : min= 122, max= 192, avg=149.05, stdev=21.98, samples=19 00:27:31.038 lat (msec) : 100=59.81%, 250=40.19% 00:27:31.038 cpu : usr=36.45%, sys=1.98%, ctx=1111, majf=0, minf=1073 00:27:31.038 IO depths : 1=0.1%, 2=3.0%, 4=11.9%, 8=70.9%, 16=14.1%, 32=0.0%, >=64=0.0% 00:27:31.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.038 complete : 0=0.0%, 4=90.2%, 8=7.2%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.038 issued rwts: total=1503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.038 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.038 filename2: (groupid=0, jobs=1): err= 0: pid=89654: Tue Nov 19 00:11:36 2024 00:27:31.038 read: IOPS=153, BW=612KiB/s (627kB/s)(6188KiB/10103msec) 00:27:31.038 slat (usec): min=5, max=8035, avg=35.90, stdev=337.75 00:27:31.038 clat (msec): min=11, max=181, avg=104.22, stdev=27.83 00:27:31.038 lat (msec): min=11, max=181, avg=104.25, stdev=27.83 00:27:31.038 clat percentiles (msec): 00:27:31.038 | 1.00th=[ 33], 5.00th=[ 70], 10.00th=[ 81], 20.00th=[ 84], 00:27:31.038 | 30.00th=[ 86], 40.00th=[ 92], 50.00th=[ 96], 60.00th=[ 103], 00:27:31.038 | 70.00th=[ 127], 80.00th=[ 140], 90.00th=[ 144], 95.00th=[ 146], 00:27:31.038 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 182], 00:27:31.038 | 99.99th=[ 182] 00:27:31.038 bw ( KiB/s): min= 480, max= 768, per=4.20%, avg=611.95, stdev=99.10, samples=20 00:27:31.038 iops : min= 120, max= 192, avg=152.95, stdev=24.72, samples=20 00:27:31.038 lat (msec) : 20=0.90%, 50=0.97%, 100=56.04%, 250=42.08% 00:27:31.038 cpu : usr=39.31%, sys=2.43%, ctx=1203, majf=0, minf=1073 00:27:31.038 IO depths : 1=0.1%, 2=2.3%, 4=9.2%, 8=73.6%, 16=14.7%, 32=0.0%, >=64=0.0% 00:27:31.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.038 complete : 0=0.0%, 4=89.7%, 8=8.3%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.038 issued rwts: total=1547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.038 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.038 filename2: (groupid=0, jobs=1): err= 0: pid=89655: Tue Nov 19 00:11:36 2024 00:27:31.038 read: IOPS=160, BW=640KiB/s (656kB/s)(6432KiB/10044msec) 00:27:31.038 slat (nsec): min=7086, max=94552, avg=18635.71, stdev=7242.32 00:27:31.038 clat (msec): min=32, max=174, avg=99.69, stdev=28.63 00:27:31.038 lat (msec): min=32, max=174, avg=99.71, stdev=28.63 00:27:31.038 clat percentiles (msec): 00:27:31.038 | 1.00th=[ 44], 5.00th=[ 56], 10.00th=[ 65], 20.00th=[ 82], 00:27:31.038 | 30.00th=[ 84], 40.00th=[ 89], 50.00th=[ 93], 60.00th=[ 99], 00:27:31.038 | 70.00th=[ 109], 80.00th=[ 136], 90.00th=[ 144], 95.00th=[ 146], 00:27:31.038 | 99.00th=[ 159], 99.50th=[ 159], 99.90th=[ 176], 99.95th=[ 176], 00:27:31.038 | 99.99th=[ 176] 00:27:31.038 bw ( KiB/s): min= 512, max= 896, per=4.39%, avg=639.20, stdev=122.62, samples=20 00:27:31.038 iops : min= 128, max= 224, avg=159.80, stdev=30.66, samples=20 00:27:31.038 lat (msec) : 50=3.98%, 100=58.33%, 250=37.69% 00:27:31.038 cpu : usr=41.18%, sys=2.42%, ctx=1579, majf=0, minf=1074 00:27:31.038 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.8%, 16=15.2%, 32=0.0%, >=64=0.0% 00:27:31.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.038 complete : 0=0.0%, 4=87.8%, 8=11.3%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.038 issued rwts: total=1608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.038 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.038 filename2: (groupid=0, jobs=1): err= 0: pid=89656: Tue Nov 19 00:11:36 2024 00:27:31.038 read: IOPS=159, BW=636KiB/s (652kB/s)(6408KiB/10068msec) 00:27:31.038 slat (usec): min=4, max=7040, avg=22.36, stdev=175.59 00:27:31.038 clat (msec): min=9, max=177, avg=100.21, stdev=30.93 00:27:31.038 lat (msec): min=9, max=177, avg=100.23, stdev=30.93 00:27:31.038 clat percentiles (msec): 00:27:31.038 | 1.00th=[ 17], 5.00th=[ 49], 10.00th=[ 66], 20.00th=[ 81], 00:27:31.038 | 30.00th=[ 86], 40.00th=[ 91], 50.00th=[ 95], 60.00th=[ 101], 00:27:31.038 | 70.00th=[ 118], 80.00th=[ 136], 90.00th=[ 144], 95.00th=[ 146], 00:27:31.038 | 99.00th=[ 153], 99.50th=[ 153], 99.90th=[ 167], 99.95th=[ 178], 00:27:31.038 | 99.99th=[ 178] 00:27:31.038 bw ( KiB/s): min= 512, max= 1133, per=4.37%, avg=636.65, stdev=147.74, samples=20 00:27:31.038 iops : min= 128, max= 283, avg=159.15, stdev=36.89, samples=20 00:27:31.038 lat (msec) : 10=1.00%, 20=0.87%, 50=3.50%, 100=55.12%, 250=39.51% 00:27:31.038 cpu : usr=39.84%, sys=2.62%, ctx=1310, majf=0, minf=1071 00:27:31.038 IO depths : 1=0.1%, 2=2.0%, 4=8.1%, 8=75.0%, 16=14.8%, 32=0.0%, >=64=0.0% 00:27:31.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.038 complete : 0=0.0%, 4=89.2%, 8=9.0%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.038 issued rwts: total=1602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.038 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:31.038 00:27:31.038 Run status group 0 (all jobs): 00:27:31.038 READ: bw=14.2MiB/s (14.9MB/s), 531KiB/s-680KiB/s (544kB/s-696kB/s), io=144MiB (151MB), run=10006-10123msec 00:27:31.038 ----------------------------------------------------- 00:27:31.038 Suppressions used: 00:27:31.038 count bytes template 00:27:31.038 45 402 /usr/src/fio/parse.c 00:27:31.038 1 8 
libtcmalloc_minimal.so 00:27:31.038 1 904 libcrypto.so 00:27:31.038 ----------------------------------------------------- 00:27:31.038 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:31.038 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@10 -- # set +x 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:31.039 bdev_null0 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:31.039 [2024-11-19 00:11:37.658791] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 
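The subsystem setup traced here is driven through rpc_cmd, the autotest wrapper around SPDK's scripts/rpc.py. Run standalone against an already-started nvmf target, the same sequence would look roughly like the sketch below; the arguments are copied from the trace (64 MiB null bdev, 512-byte block size, 16-byte metadata, DIF type 1), while the scripts/rpc.py invocation path is an assumption about working from an SPDK repo checkout.

# Sketch of the equivalent standalone RPC calls for subsystem 1
# (assumes a running SPDK nvmf target; rpc_cmd in the trace wraps scripts/rpc.py).
scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    --serial-number 53313233-1 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420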
00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:31.039 bdev_null1 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:31.039 { 00:27:31.039 "params": { 00:27:31.039 "name": "Nvme$subsystem", 00:27:31.039 "trtype": "$TEST_TRANSPORT", 00:27:31.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.039 "adrfam": "ipv4", 00:27:31.039 "trsvcid": "$NVMF_PORT", 00:27:31.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.039 "hdgst": ${hdgst:-false}, 00:27:31.039 "ddgst": ${ddgst:-false} 00:27:31.039 }, 00:27:31.039 "method": "bdev_nvme_attach_controller" 00:27:31.039 } 00:27:31.039 EOF 00:27:31.039 )") 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@56 -- # cat 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:31.039 { 00:27:31.039 "params": { 00:27:31.039 "name": "Nvme$subsystem", 00:27:31.039 "trtype": "$TEST_TRANSPORT", 00:27:31.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.039 "adrfam": "ipv4", 00:27:31.039 "trsvcid": "$NVMF_PORT", 00:27:31.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.039 "hdgst": ${hdgst:-false}, 00:27:31.039 "ddgst": ${ddgst:-false} 00:27:31.039 }, 00:27:31.039 "method": "bdev_nvme_attach_controller" 00:27:31.039 } 00:27:31.039 EOF 00:27:31.039 )") 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:31.039 00:11:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:31.039 "params": { 00:27:31.039 "name": "Nvme0", 00:27:31.039 "trtype": "tcp", 00:27:31.039 "traddr": "10.0.0.3", 00:27:31.039 "adrfam": "ipv4", 00:27:31.039 "trsvcid": "4420", 00:27:31.039 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:31.039 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:31.039 "hdgst": false, 00:27:31.039 "ddgst": false 00:27:31.039 }, 00:27:31.039 "method": "bdev_nvme_attach_controller" 00:27:31.039 },{ 00:27:31.039 "params": { 00:27:31.039 "name": "Nvme1", 00:27:31.039 "trtype": "tcp", 00:27:31.039 "traddr": "10.0.0.3", 00:27:31.039 "adrfam": "ipv4", 00:27:31.039 "trsvcid": "4420", 00:27:31.039 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:31.039 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:31.039 "hdgst": false, 00:27:31.039 "ddgst": false 00:27:31.039 }, 00:27:31.039 "method": "bdev_nvme_attach_controller" 00:27:31.039 }' 00:27:31.299 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:31.299 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:31.299 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:27:31.299 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:31.299 00:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:31.299 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:31.299 ... 00:27:31.299 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:31.299 ... 
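For reference, the per-controller fragments that gen_nvmf_target_json prints above are joined by jq into one bdev-subsystem JSON config and handed to the fio plugin on /dev/fd/62. A hedged reconstruction for the Nvme0 controller follows; the params block is verbatim from the printf output, but the outer "subsystems"/"bdev" wrapper is an assumption based on SPDK's usual JSON-config layout and does not appear verbatim in this trace.

# Assumed shape of the config fio receives via --spdk_json_conf /dev/fd/62
# (one controller shown; Nvme1 follows the same pattern with cnode1/host1).
cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON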
00:27:31.299 fio-3.35 00:27:31.299 Starting 4 threads 00:27:37.869 00:27:37.869 filename0: (groupid=0, jobs=1): err= 0: pid=89790: Tue Nov 19 00:11:43 2024 00:27:37.869 read: IOPS=1692, BW=13.2MiB/s (13.9MB/s)(66.1MiB/5002msec) 00:27:37.869 slat (nsec): min=5205, max=69831, avg=16624.87, stdev=5272.72 00:27:37.869 clat (usec): min=1421, max=8449, avg=4663.71, stdev=538.07 00:27:37.869 lat (usec): min=1435, max=8471, avg=4680.34, stdev=537.96 00:27:37.869 clat percentiles (usec): 00:27:37.869 | 1.00th=[ 2606], 5.00th=[ 4293], 10.00th=[ 4359], 20.00th=[ 4424], 00:27:37.869 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4686], 00:27:37.869 | 70.00th=[ 4817], 80.00th=[ 5080], 90.00th=[ 5342], 95.00th=[ 5538], 00:27:37.869 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 7111], 99.95th=[ 7177], 00:27:37.869 | 99.99th=[ 8455] 00:27:37.869 bw ( KiB/s): min=11904, max=14208, per=23.88%, avg=13440.00, stdev=870.49, samples=9 00:27:37.869 iops : min= 1488, max= 1776, avg=1680.00, stdev=108.81, samples=9 00:27:37.869 lat (msec) : 2=0.13%, 4=4.28%, 10=95.59% 00:27:37.869 cpu : usr=90.50%, sys=8.62%, ctx=14, majf=0, minf=1072 00:27:37.869 IO depths : 1=0.1%, 2=22.9%, 4=51.3%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:37.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.869 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.869 issued rwts: total=8464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.869 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:37.869 filename0: (groupid=0, jobs=1): err= 0: pid=89791: Tue Nov 19 00:11:43 2024 00:27:37.869 read: IOPS=1895, BW=14.8MiB/s (15.5MB/s)(74.1MiB/5003msec) 00:27:37.869 slat (nsec): min=5143, max=62465, avg=15199.55, stdev=5572.00 00:27:37.869 clat (usec): min=1084, max=8491, avg=4170.58, stdev=936.74 00:27:37.869 lat (usec): min=1094, max=8514, avg=4185.78, stdev=937.22 00:27:37.869 clat percentiles (usec): 00:27:37.869 | 1.00th=[ 1549], 5.00th=[ 1762], 10.00th=[ 2638], 20.00th=[ 3425], 00:27:37.869 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4424], 60.00th=[ 4490], 00:27:37.869 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 5014], 95.00th=[ 5276], 00:27:37.869 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 8094], 99.95th=[ 8160], 00:27:37.869 | 99.99th=[ 8455] 00:27:37.869 bw ( KiB/s): min=13824, max=18016, per=27.08%, avg=15240.56, stdev=1511.68, samples=9 00:27:37.869 iops : min= 1728, max= 2252, avg=1905.00, stdev=188.98, samples=9 00:27:37.869 lat (msec) : 2=5.22%, 4=19.87%, 10=74.91% 00:27:37.869 cpu : usr=91.02%, sys=8.02%, ctx=37, majf=0, minf=1074 00:27:37.869 IO depths : 1=0.1%, 2=13.4%, 4=56.5%, 8=30.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:37.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.869 complete : 0=0.0%, 4=94.9%, 8=5.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.869 issued rwts: total=9483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.869 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:37.869 filename1: (groupid=0, jobs=1): err= 0: pid=89792: Tue Nov 19 00:11:43 2024 00:27:37.869 read: IOPS=1797, BW=14.0MiB/s (14.7MB/s)(70.3MiB/5004msec) 00:27:37.869 slat (usec): min=5, max=314, avg=16.92, stdev= 8.04 00:27:37.869 clat (usec): min=1082, max=9670, avg=4388.49, stdev=747.75 00:27:37.869 lat (usec): min=1096, max=9696, avg=4405.40, stdev=747.48 00:27:37.869 clat percentiles (usec): 00:27:37.869 | 1.00th=[ 2409], 5.00th=[ 2638], 10.00th=[ 2966], 20.00th=[ 4359], 00:27:37.869 | 30.00th=[ 4424], 
40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4555], 00:27:37.869 | 70.00th=[ 4686], 80.00th=[ 4817], 90.00th=[ 5080], 95.00th=[ 5342], 00:27:37.869 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 8979], 99.95th=[ 9372], 00:27:37.869 | 99.99th=[ 9634] 00:27:37.870 bw ( KiB/s): min=13184, max=16368, per=25.54%, avg=14373.33, stdev=902.62, samples=9 00:27:37.870 iops : min= 1648, max= 2046, avg=1796.67, stdev=112.83, samples=9 00:27:37.870 lat (msec) : 2=0.20%, 4=15.65%, 10=84.15% 00:27:37.870 cpu : usr=90.19%, sys=8.49%, ctx=106, majf=0, minf=1075 00:27:37.870 IO depths : 1=0.1%, 2=17.7%, 4=54.2%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:37.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.870 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.870 issued rwts: total=8997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.870 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:37.870 filename1: (groupid=0, jobs=1): err= 0: pid=89793: Tue Nov 19 00:11:43 2024 00:27:37.870 read: IOPS=1650, BW=12.9MiB/s (13.5MB/s)(64.5MiB/5001msec) 00:27:37.870 slat (nsec): min=5342, max=63516, avg=16430.27, stdev=5575.85 00:27:37.870 clat (usec): min=1517, max=7560, avg=4779.64, stdev=487.56 00:27:37.870 lat (usec): min=1531, max=7581, avg=4796.07, stdev=487.70 00:27:37.870 clat percentiles (usec): 00:27:37.870 | 1.00th=[ 4293], 5.00th=[ 4359], 10.00th=[ 4359], 20.00th=[ 4424], 00:27:37.870 | 30.00th=[ 4490], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4752], 00:27:37.870 | 70.00th=[ 4883], 80.00th=[ 5211], 90.00th=[ 5473], 95.00th=[ 5669], 00:27:37.870 | 99.00th=[ 6325], 99.50th=[ 6915], 99.90th=[ 7439], 99.95th=[ 7439], 00:27:37.870 | 99.99th=[ 7570] 00:27:37.870 bw ( KiB/s): min=11904, max=14208, per=23.46%, avg=13200.89, stdev=926.90, samples=9 00:27:37.870 iops : min= 1488, max= 1776, avg=1650.11, stdev=115.86, samples=9 00:27:37.870 lat (msec) : 2=0.10%, 10=99.90% 00:27:37.870 cpu : usr=91.32%, sys=7.82%, ctx=11, majf=0, minf=1075 00:27:37.870 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:37.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.870 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.870 issued rwts: total=8256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.870 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:37.870 00:27:37.870 Run status group 0 (all jobs): 00:27:37.870 READ: bw=55.0MiB/s (57.6MB/s), 12.9MiB/s-14.8MiB/s (13.5MB/s-15.5MB/s), io=275MiB (288MB), run=5001-5004msec 00:27:38.439 ----------------------------------------------------- 00:27:38.439 Suppressions used: 00:27:38.439 count bytes template 00:27:38.439 6 52 /usr/src/fio/parse.c 00:27:38.439 1 8 libtcmalloc_minimal.so 00:27:38.439 1 904 libcrypto.so 00:27:38.439 ----------------------------------------------------- 00:27:38.439 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:38.439 00:11:44 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.439 00:27:38.439 real 0m27.141s 00:27:38.439 user 2m7.852s 00:27:38.439 sys 0m9.062s 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:38.439 00:11:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.439 ************************************ 00:27:38.439 END TEST fio_dif_rand_params 00:27:38.439 ************************************ 00:27:38.439 00:11:44 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:38.439 00:11:44 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:38.439 00:11:44 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:38.439 00:11:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:38.439 ************************************ 00:27:38.439 START TEST fio_dif_digest 00:27:38.439 ************************************ 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 
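fio_dif_digest reuses the machinery above with a single null bdev and data protection pushed to the transport: NULL_DIF=3 carves the bdev with 16 bytes of metadata and DIF type 3, while hdgst/ddgst (set true on the next lines) enable NVMe/TCP header and data digests end to end. The subsystem the following trace assembles, written out as plain RPC calls; rpc.py here stands in for the harness's rpc_cmd wrapper, and every value is the one visible in the trace:

    rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420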
00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:38.439 bdev_null0 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.439 00:11:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:38.439 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.439 00:11:45 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:38.439 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.439 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:38.439 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.439 00:11:45 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:38.439 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.439 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:38.439 [2024-11-19 00:11:45.012573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:38.439 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.439 00:11:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:38.439 00:11:45 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:38.439 00:11:45 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:38.439 00:11:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:38.439 00:11:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:27:38.439 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:38.439 00:11:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:27:38.439 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:27:38.440 00:11:45 
nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:38.440 { 00:27:38.440 "params": { 00:27:38.440 "name": "Nvme$subsystem", 00:27:38.440 "trtype": "$TEST_TRANSPORT", 00:27:38.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.440 "adrfam": "ipv4", 00:27:38.440 "trsvcid": "$NVMF_PORT", 00:27:38.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.440 "hdgst": ${hdgst:-false}, 00:27:38.440 "ddgst": ${ddgst:-false} 00:27:38.440 }, 00:27:38.440 "method": "bdev_nvme_attach_controller" 00:27:38.440 } 00:27:38.440 EOF 00:27:38.440 )") 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
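gen_nvmf_target_json above is a shell templating trick: for each subsystem index it expands a heredoc into one attach-controller fragment, with ${hdgst:-false}/${ddgst:-false} defaulting to false unless the caller exported them (fio_dif_digest exports both as true), then IFS=, joins the fragments and jq . syntax-checks the assembled document before fio ever sees it. A single-fragment sketch of the same expansion, rewritten with printf for brevity:

    # Expand one fragment for subsystem 0 and validate it with jq,
    # mirroring the ${hdgst:-false}/${ddgst:-false} defaults in the trace.
    subsystem=0 hdgst=true ddgst=true
    printf '{"params": {"name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.3",
      "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s",
      "hdgst": %s, "ddgst": %s},
      "method": "bdev_nvme_attach_controller"}\n' \
        "$subsystem" "$subsystem" "${hdgst:-false}" "${ddgst:-false}" | jq .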
00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:38.440 "params": { 00:27:38.440 "name": "Nvme0", 00:27:38.440 "trtype": "tcp", 00:27:38.440 "traddr": "10.0.0.3", 00:27:38.440 "adrfam": "ipv4", 00:27:38.440 "trsvcid": "4420", 00:27:38.440 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:38.440 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:38.440 "hdgst": true, 00:27:38.440 "ddgst": true 00:27:38.440 }, 00:27:38.440 "method": "bdev_nvme_attach_controller" 00:27:38.440 }' 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:38.440 00:11:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:38.700 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:38.700 ... 00:27:38.700 fio-3.35 00:27:38.700 Starting 3 threads 00:27:50.913 00:27:50.913 filename0: (groupid=0, jobs=1): err= 0: pid=89903: Tue Nov 19 00:11:56 2024 00:27:50.913 read: IOPS=205, BW=25.6MiB/s (26.9MB/s)(257MiB/10015msec) 00:27:50.913 slat (nsec): min=8394, max=74047, avg=19002.53, stdev=6524.38 00:27:50.913 clat (usec): min=13956, max=18933, avg=14576.56, stdev=505.28 00:27:50.913 lat (usec): min=13971, max=18973, avg=14595.56, stdev=505.80 00:27:50.913 clat percentiles (usec): 00:27:50.913 | 1.00th=[14091], 5.00th=[14091], 10.00th=[14222], 20.00th=[14222], 00:27:50.913 | 30.00th=[14353], 40.00th=[14353], 50.00th=[14353], 60.00th=[14484], 00:27:50.913 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15139], 95.00th=[15664], 00:27:50.913 | 99.00th=[16319], 99.50th=[16581], 99.90th=[19006], 99.95th=[19006], 00:27:50.913 | 99.99th=[19006] 00:27:50.913 bw ( KiB/s): min=25344, max=26880, per=33.33%, avg=26265.60, stdev=472.77, samples=20 00:27:50.913 iops : min= 198, max= 210, avg=205.20, stdev= 3.69, samples=20 00:27:50.913 lat (msec) : 20=100.00% 00:27:50.913 cpu : usr=91.93%, sys=7.47%, ctx=11, majf=0, minf=1074 00:27:50.913 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:50.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.913 issued rwts: total=2055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.913 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:50.913 filename0: (groupid=0, jobs=1): err= 0: pid=89904: Tue Nov 19 00:11:56 2024 00:27:50.913 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(257MiB/10012msec) 00:27:50.913 slat (nsec): min=5378, max=73683, avg=19671.51, stdev=6760.71 00:27:50.913 clat (usec): min=13942, max=16916, avg=14570.33, stdev=482.51 00:27:50.913 lat (usec): min=13956, max=16941, avg=14590.00, stdev=483.04 00:27:50.913 clat percentiles (usec): 00:27:50.913 | 1.00th=[14091], 5.00th=[14091], 10.00th=[14222], 20.00th=[14222], 00:27:50.913 | 30.00th=[14353], 40.00th=[14353], 50.00th=[14353], 60.00th=[14484], 00:27:50.913 | 
70.00th=[14615], 80.00th=[14877], 90.00th=[15139], 95.00th=[15533], 00:27:50.913 | 99.00th=[16319], 99.50th=[16450], 99.90th=[16909], 99.95th=[16909], 00:27:50.913 | 99.99th=[16909] 00:27:50.913 bw ( KiB/s): min=25344, max=26880, per=33.33%, avg=26262.95, stdev=403.02, samples=20 00:27:50.913 iops : min= 198, max= 210, avg=205.15, stdev= 3.17, samples=20 00:27:50.913 lat (msec) : 20=100.00% 00:27:50.913 cpu : usr=91.43%, sys=7.60%, ctx=80, majf=0, minf=1074 00:27:50.913 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:50.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.913 issued rwts: total=2055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.913 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:50.913 filename0: (groupid=0, jobs=1): err= 0: pid=89905: Tue Nov 19 00:11:56 2024 00:27:50.913 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(257MiB/10011msec) 00:27:50.913 slat (nsec): min=5415, max=75797, avg=19734.86, stdev=6654.83 00:27:50.913 clat (usec): min=13953, max=16921, avg=14568.78, stdev=478.85 00:27:50.913 lat (usec): min=13967, max=16946, avg=14588.52, stdev=479.16 00:27:50.913 clat percentiles (usec): 00:27:50.913 | 1.00th=[14091], 5.00th=[14091], 10.00th=[14222], 20.00th=[14222], 00:27:50.913 | 30.00th=[14353], 40.00th=[14353], 50.00th=[14353], 60.00th=[14484], 00:27:50.913 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15139], 95.00th=[15664], 00:27:50.913 | 99.00th=[16319], 99.50th=[16319], 99.90th=[16909], 99.95th=[16909], 00:27:50.913 | 99.99th=[16909] 00:27:50.913 bw ( KiB/s): min=25344, max=26880, per=33.33%, avg=26265.60, stdev=401.78, samples=20 00:27:50.913 iops : min= 198, max= 210, avg=205.20, stdev= 3.14, samples=20 00:27:50.913 lat (msec) : 20=100.00% 00:27:50.913 cpu : usr=92.73%, sys=6.63%, ctx=14, majf=0, minf=1072 00:27:50.913 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:50.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.913 issued rwts: total=2055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.913 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:50.913 00:27:50.913 Run status group 0 (all jobs): 00:27:50.913 READ: bw=76.9MiB/s (80.7MB/s), 25.6MiB/s-25.7MiB/s (26.9MB/s-26.9MB/s), io=771MiB (808MB), run=10011-10015msec 00:27:50.913 ----------------------------------------------------- 00:27:50.913 Suppressions used: 00:27:50.913 count bytes template 00:27:50.913 5 44 /usr/src/fio/parse.c 00:27:50.913 1 8 libtcmalloc_minimal.so 00:27:50.913 1 904 libcrypto.so 00:27:50.913 ----------------------------------------------------- 00:27:50.913 00:27:50.913 00:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:50.913 00:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:27:50.913 00:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:27:50.914 00:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:50.914 00:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:27:50.914 00:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:50.914 00:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.914 00:11:57 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@10 -- # set +x 00:27:50.914 00:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.914 00:11:57 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:50.914 00:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.914 00:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:50.914 00:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.914 00:27:50.914 real 0m12.240s 00:27:50.914 user 0m29.512s 00:27:50.914 sys 0m2.499s 00:27:50.914 00:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:50.914 00:11:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:50.914 ************************************ 00:27:50.914 END TEST fio_dif_digest 00:27:50.914 ************************************ 00:27:50.914 00:11:57 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:50.914 00:11:57 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:27:50.914 00:11:57 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:50.914 00:11:57 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:27:50.914 00:11:57 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:50.914 00:11:57 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:27:50.914 00:11:57 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:50.914 00:11:57 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:50.914 rmmod nvme_tcp 00:27:50.914 rmmod nvme_fabrics 00:27:50.914 rmmod nvme_keyring 00:27:50.914 00:11:57 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:50.914 00:11:57 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:27:50.914 00:11:57 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:27:50.914 00:11:57 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 89148 ']' 00:27:50.914 00:11:57 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 89148 00:27:50.914 00:11:57 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 89148 ']' 00:27:50.914 00:11:57 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 89148 00:27:50.914 00:11:57 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:27:50.914 00:11:57 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:50.914 00:11:57 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89148 00:27:50.914 00:11:57 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:50.914 00:11:57 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:50.914 killing process with pid 89148 00:27:50.914 00:11:57 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89148' 00:27:50.914 00:11:57 nvmf_dif -- common/autotest_common.sh@973 -- # kill 89148 00:27:50.914 00:11:57 nvmf_dif -- common/autotest_common.sh@978 -- # wait 89148 00:27:51.851 00:11:58 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:27:51.851 00:11:58 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:51.851 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:51.851 Waiting for block devices as requested 00:27:52.111 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:52.111 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:52.111 00:11:58 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:52.111 00:11:58 nvmf_dif -- nvmf/common.sh@524 -- 
# nvmf_tcp_fini 00:27:52.111 00:11:58 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:27:52.111 00:11:58 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:27:52.111 00:11:58 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:52.111 00:11:58 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:27:52.111 00:11:58 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:52.111 00:11:58 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:52.111 00:11:58 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:52.111 00:11:58 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:52.111 00:11:58 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:52.370 00:11:58 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:52.370 00:11:58 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:52.370 00:11:58 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:52.370 00:11:58 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:52.370 00:11:58 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:52.370 00:11:58 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:52.370 00:11:58 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:52.370 00:11:58 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:52.370 00:11:58 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:52.370 00:11:58 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:52.370 00:11:58 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:52.370 00:11:58 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.370 00:11:58 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:52.370 00:11:58 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.370 00:11:58 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:27:52.370 00:27:52.370 real 1m8.115s 00:27:52.370 user 4m4.033s 00:27:52.370 sys 0m20.055s 00:27:52.370 00:11:58 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:52.370 00:11:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:52.370 ************************************ 00:27:52.370 END TEST nvmf_dif 00:27:52.370 ************************************ 00:27:52.370 00:11:59 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:52.370 00:11:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:52.370 00:11:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:52.370 00:11:59 -- common/autotest_common.sh@10 -- # set +x 00:27:52.370 ************************************ 00:27:52.370 START TEST nvmf_abort_qd_sizes 00:27:52.370 ************************************ 00:27:52.370 00:11:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:52.630 * Looking for test storage... 
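The teardown just above is deliberately surgical about the firewall: every rule the test installed via the ipts wrapper carries an SPDK_NVMF comment, so the iptr helper can drop exactly those rules by round-tripping the ruleset before the veths, the bridge and the target namespace are deleted. The firewall idiom on its own, composed from the same three commands the trace shows:

    # Remove only the test's own rules: filter the tagged lines out of the
    # saved ruleset and restore the remainder atomically.
    iptables-save | grep -v SPDK_NVMF | iptables-restore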
00:27:52.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:52.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.630 --rc genhtml_branch_coverage=1 00:27:52.630 --rc genhtml_function_coverage=1 00:27:52.630 --rc genhtml_legend=1 00:27:52.630 --rc geninfo_all_blocks=1 00:27:52.630 --rc geninfo_unexecuted_blocks=1 00:27:52.630 00:27:52.630 ' 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:52.630 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.630 --rc genhtml_branch_coverage=1 00:27:52.630 --rc genhtml_function_coverage=1 00:27:52.630 --rc genhtml_legend=1 00:27:52.630 --rc geninfo_all_blocks=1 00:27:52.630 --rc geninfo_unexecuted_blocks=1 00:27:52.630 00:27:52.630 ' 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:52.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.630 --rc genhtml_branch_coverage=1 00:27:52.630 --rc genhtml_function_coverage=1 00:27:52.630 --rc genhtml_legend=1 00:27:52.630 --rc geninfo_all_blocks=1 00:27:52.630 --rc geninfo_unexecuted_blocks=1 00:27:52.630 00:27:52.630 ' 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:52.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.630 --rc genhtml_branch_coverage=1 00:27:52.630 --rc genhtml_function_coverage=1 00:27:52.630 --rc genhtml_legend=1 00:27:52.630 --rc geninfo_all_blocks=1 00:27:52.630 --rc geninfo_unexecuted_blocks=1 00:27:52.630 00:27:52.630 ' 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:52.630 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:52.631 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:52.631 Cannot find device "nvmf_init_br" 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:52.631 Cannot find device "nvmf_init_br2" 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:52.631 Cannot find device "nvmf_tgt_br" 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:52.631 Cannot find device "nvmf_tgt_br2" 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:27:52.631 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:52.631 Cannot find device "nvmf_init_br" 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set 
nvmf_init_br2 down 00:27:52.890 Cannot find device "nvmf_init_br2" 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:52.890 Cannot find device "nvmf_tgt_br" 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:52.890 Cannot find device "nvmf_tgt_br2" 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:52.890 Cannot find device "nvmf_br" 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:52.890 Cannot find device "nvmf_init_if" 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:52.890 Cannot find device "nvmf_init_if2" 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:52.890 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:52.890 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 
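The Cannot find device / Cannot open network namespace errors above are the expected clean-host case: nvmf_veth_init tears down before it builds, so a previous run can never leak state into this one. The topology it then creates is four veth pairs whose *_if ends carry the addresses (10.0.0.1/.2 stay on the host as initiators, 10.0.0.3/.4 move into nvmf_tgt_ns_spdk as targets) and whose *_br ends get enslaved to a bridge on the lines that follow. Condensed to one initiator/target pair, with names and addresses as in the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # host (initiator) end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                             # built on the next lines
    ip link set nvmf_init_br master nvmf_br                     # *_br peers join the bridge
    ip link set nvmf_tgt_br master nvmf_br
    # (each interface is also brought up with ip link set ... up, as the trace shows)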
00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:52.890 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:53.150 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:53.150 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:53.150 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:53.150 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:53.150 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:53.150 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:53.150 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:53.150 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:53.150 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:53.150 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:53.150 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:27:53.150 00:27:53.150 --- 10.0.0.3 ping statistics --- 00:27:53.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.150 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:27:53.150 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:53.150 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:53.150 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:27:53.150 00:27:53.150 --- 10.0.0.4 ping statistics --- 00:27:53.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.150 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:27:53.150 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:53.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:53.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:27:53.150 00:27:53.150 --- 10.0.0.1 ping statistics --- 00:27:53.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.150 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:27:53.150 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:53.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:53.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:27:53.150 00:27:53.151 --- 10.0.0.2 ping statistics --- 00:27:53.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.151 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:27:53.151 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:53.151 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:27:53.151 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:27:53.151 00:11:59 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:53.719 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:53.978 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:53.978 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:53.978 00:12:00 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.978 00:12:00 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:53.978 00:12:00 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:53.978 00:12:00 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.978 00:12:00 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:53.978 00:12:00 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:53.978 00:12:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:53.978 00:12:00 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:53.978 00:12:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:53.978 00:12:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:53.978 00:12:00 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=90567 00:27:53.978 00:12:00 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:53.978 00:12:00 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 90567 00:27:53.978 00:12:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 90567 ']' 00:27:53.978 00:12:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.978 00:12:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:53.978 00:12:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.978 00:12:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:53.978 00:12:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:54.237 [2024-11-19 00:12:00.667409] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
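Only after all four pings succeed in both directions across the bridge is the fabric considered usable; the kernel NVMe driver is then unbound in favor of uio_pci_generic and the target is started. Because NVMF_APP was prefixed with the namespace command, the nvmf_tgt process genuinely runs inside nvmf_tgt_ns_spdk, which is why its TCP listener can later bind 10.0.0.3. The launch as the trace runs it:

    # -i 0: shared-memory instance id, -e 0xFFFF: enable all tracepoint
    # groups, -m 0xf: run reactors on cores 0-3 (hence the four
    # "Reactor started" notices below). waitforlisten then polls the
    # /var/tmp/spdk.sock RPC socket until pid 90567 answers.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf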
00:27:54.237 [2024-11-19 00:12:00.667898] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:54.237 [2024-11-19 00:12:00.858843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:54.497 [2024-11-19 00:12:00.990813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:54.497 [2024-11-19 00:12:00.991052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:54.497 [2024-11-19 00:12:00.991239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:54.497 [2024-11-19 00:12:00.991407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:54.497 [2024-11-19 00:12:00.991476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:54.497 [2024-11-19 00:12:00.993865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.497 [2024-11-19 00:12:00.994013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:54.497 [2024-11-19 00:12:00.994100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.497 [2024-11-19 00:12:00.994119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:54.756 [2024-11-19 00:12:01.212012] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:55.014 00:12:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.014 00:12:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:27:55.014 00:12:01 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:55.014 00:12:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:55.014 00:12:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:27:55.273 00:12:01 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
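nvme_in_userspace above is plain PCI arithmetic: class 01 (mass storage), subclass 08 (non-volatile memory) and prog-if 02 (NVMe) are printf'd into the "0108" key used to filter lspci output, and each matching BDF is then vetted against the allow/block lists and the sysfs driver state before being kept (both 0000:00:10.0 and 0000:00:11.0 survive here, and the first becomes the abort target). The four stages the trace shows, composed into one pipeline; their relative order as filters is interchangeable:

    # Print the domain:bus:device.function of every NVMe-class device;
    # -mm -n -D gives quoted numeric fields, tr strips the quotes.
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'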
00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:55.273 00:12:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:55.273 ************************************ 00:27:55.273 START TEST spdk_target_abort 00:27:55.273 ************************************ 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:55.273 spdk_targetn1 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:55.273 [2024-11-19 00:12:01.837904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:55.273 [2024-11-19 00:12:01.884330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:55.273 00:12:01 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:55.273 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:55.274 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:27:55.274 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:55.274 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:27:55.274 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:55.274 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:55.274 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:55.274 00:12:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:58.611 Initializing NVMe Controllers 00:27:58.611 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:27:58.611 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:58.611 Initialization complete. Launching workers. 
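The spdk_target bring-up traced above reduces to five RPCs: claim the local PCIe controller as a bdev, then export its namespace over NVMe/TCP. rpc_cmd is autotest's wrapper around scripts/rpc.py, so the same sequence can be replayed by hand (addresses, names and flags copied from this run; the qd=4 summary continues just below):

    rpc=scripts/rpc.py
    # attach the PCIe device as bdev controller "spdk_target" (its namespace becomes spdk_targetn1)
    $rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
    # bring up the TCP transport, flags exactly as captured in the trace
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # subsystem, namespace, listener
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420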
00:27:58.611 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8632, failed: 0 00:27:58.611 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1044, failed to submit 7588 00:27:58.611 success 829, unsuccessful 215, failed 0 00:27:58.611 00:12:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:58.611 00:12:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:02.800 Initializing NVMe Controllers 00:28:02.800 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:02.800 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:02.800 Initialization complete. Launching workers. 00:28:02.800 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8904, failed: 0 00:28:02.800 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1164, failed to submit 7740 00:28:02.800 success 373, unsuccessful 791, failed 0 00:28:02.800 00:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:02.800 00:12:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:06.088 Initializing NVMe Controllers 00:28:06.088 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:06.088 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:06.088 Initialization complete. Launching workers. 
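Each Initializing/Launching/summary block comes from one run of the abort example at a different queue depth; the driving loop in abort_qd_sizes.sh reduces to the sketch below (the qd=64 summary continues just past this point). As we read the counters: submitted plus failed-to-submit adds up to the I/O count (1044 + 7588 = 8632 in the qd=4 block), while success and unsuccessful split the submitted aborts by whether the target command was actually cancelled.

    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    qds=(4 24 64)
    for qd in "${qds[@]}"; do
        # -q queue depth, -w rw mixed workload, -M 50 (50% reads), -o 4096-byte I/Os
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done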
00:28:06.088 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27694, failed: 0 00:28:06.088 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2245, failed to submit 25449 00:28:06.088 success 359, unsuccessful 1886, failed 0 00:28:06.088 00:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:06.088 00:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.088 00:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:06.088 00:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.088 00:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:06.088 00:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.088 00:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:06.088 00:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.088 00:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 90567 00:28:06.088 00:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 90567 ']' 00:28:06.088 00:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 90567 00:28:06.088 00:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:28:06.088 00:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:06.089 00:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90567 00:28:06.089 killing process with pid 90567 00:28:06.089 00:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:06.089 00:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:06.089 00:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90567' 00:28:06.089 00:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 90567 00:28:06.089 00:12:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 90567 00:28:06.658 ************************************ 00:28:06.658 END TEST spdk_target_abort 00:28:06.658 ************************************ 00:28:06.658 00:28:06.658 real 0m11.353s 00:28:06.658 user 0m45.634s 00:28:06.658 sys 0m2.228s 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:06.658 00:12:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:06.658 00:12:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:06.658 00:12:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:06.658 00:12:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:06.658 ************************************ 00:28:06.658 START TEST kernel_target_abort 00:28:06.658 
************************************ 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:06.658 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:06.918 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:06.918 Waiting for block devices as requested 00:28:06.918 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:07.178 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:07.437 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:07.437 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:07.437 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:07.437 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:07.437 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:07.437 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:07.437 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:07.437 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:07.437 00:12:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:07.437 No valid GPT data, bailing 00:28:07.437 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:07.437 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:07.437 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:07.437 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:07.437 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:07.437 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:07.437 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:28:07.437 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:28:07.438 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:07.438 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:07.438 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:28:07.438 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:28:07.438 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:07.438 No valid GPT data, bailing 00:28:07.438 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
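This scan (continuing through nvme0n2, nvme0n3 and nvme1n1 below) decides which block devices the kernel target may safely consume: zoned namespaces are skipped, and a device counts as free only when spdk-gpt.py and blkid find no partition table on it, hence the repeated "No valid GPT data, bailing" lines. A simplified sketch of the loop:

    for block in /sys/block/nvme*; do
        dev=${block##*/}
        # skip zoned namespaces outright
        [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
        # empty PTTYPE means no partition table, so nothing lives on the device
        if [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
            nvme=/dev/$dev        # last free device wins, /dev/nvme1n1 in this run
        fi
    done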
00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:07.697 No valid GPT data, bailing 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:07.697 No valid GPT data, bailing 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a --hostid=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a -a 10.0.0.1 -t tcp -s 4420 00:28:07.697 00:28:07.697 Discovery Log Number of Records 2, Generation counter 2 00:28:07.697 =====Discovery Log Entry 0====== 00:28:07.697 trtype: tcp 00:28:07.697 adrfam: ipv4 00:28:07.697 subtype: current discovery subsystem 00:28:07.697 treq: not specified, sq flow control disable supported 00:28:07.697 portid: 1 00:28:07.697 trsvcid: 4420 00:28:07.697 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:07.697 traddr: 10.0.0.1 00:28:07.697 eflags: none 00:28:07.697 sectype: none 00:28:07.697 =====Discovery Log Entry 1====== 00:28:07.697 trtype: tcp 00:28:07.697 adrfam: ipv4 00:28:07.697 subtype: nvme subsystem 00:28:07.697 treq: not specified, sq flow control disable supported 00:28:07.697 portid: 1 00:28:07.697 trsvcid: 4420 00:28:07.697 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:07.697 traddr: 10.0.0.1 00:28:07.697 eflags: none 00:28:07.697 sectype: none 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:07.697 00:12:14 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:07.697 00:12:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:10.986 Initializing NVMe Controllers 00:28:10.986 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:10.986 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:10.986 Initialization complete. Launching workers. 00:28:10.986 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 25040, failed: 0 00:28:10.986 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25040, failed to submit 0 00:28:10.986 success 0, unsuccessful 25040, failed 0 00:28:10.986 00:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:10.986 00:12:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:14.274 Initializing NVMe Controllers 00:28:14.274 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:14.274 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:14.274 Initialization complete. Launching workers. 
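Before these runs, configure_kernel_target assembled a kernel NVMe/TCP target over configfs (the mkdir/echo/ln trace above). xtrace does not record redirection targets, so the echo destinations below are the stock kernel nvmet attribute names, inferred rather than read from the log (the SPDK-nqn… echo sets a subsystem identity attribute and is omitted here):

    nvmet=/sys/kernel/config/nvmet
    sub=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet
    mkdir "$sub" "$sub/namespaces/1" "$nvmet/ports/1"
    echo 1            > "$sub/attr_allow_any_host"        # inferred destination
    echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"   # inferred destination
    echo 1            > "$sub/namespaces/1/enable"        # inferred destination
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"      # inferred destination
    echo tcp          > "$nvmet/ports/1/addr_trtype"      # inferred destination
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"     # inferred destination
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"      # inferred destination
    ln -s "$sub" "$nvmet/ports/1/subsystems/"

Teardown, traced further below as clean_kernel_target, reverses this: disable the namespace (echo 0), remove the port's subsystem symlink, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet.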
00:28:14.274 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56806, failed: 0 00:28:14.274 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23166, failed to submit 33640 00:28:14.274 success 0, unsuccessful 23166, failed 0 00:28:14.274 00:12:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:14.274 00:12:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:17.561 Initializing NVMe Controllers 00:28:17.562 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:17.562 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:17.562 Initialization complete. Launching workers. 00:28:17.562 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62109, failed: 0 00:28:17.562 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15564, failed to submit 46545 00:28:17.562 success 0, unsuccessful 15564, failed 0 00:28:17.562 00:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:17.562 00:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:17.562 00:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:28:17.562 00:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:17.562 00:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:17.562 00:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:17.562 00:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:17.562 00:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:17.562 00:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:17.562 00:12:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:18.499 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:19.068 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:19.068 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:19.068 00:28:19.068 real 0m12.400s 00:28:19.068 user 0m6.414s 00:28:19.068 sys 0m3.674s 00:28:19.068 00:12:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:19.068 00:12:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.068 ************************************ 00:28:19.068 END TEST kernel_target_abort 00:28:19.068 ************************************ 00:28:19.068 00:12:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:19.068 00:12:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:19.068 
00:12:25 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:19.068 00:12:25 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:28:19.068 00:12:25 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:19.068 00:12:25 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:28:19.068 00:12:25 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:19.068 00:12:25 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:19.068 rmmod nvme_tcp 00:28:19.068 rmmod nvme_fabrics 00:28:19.068 rmmod nvme_keyring 00:28:19.068 00:12:25 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:19.068 00:12:25 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:28:19.068 00:12:25 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:28:19.068 00:12:25 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 90567 ']' 00:28:19.068 00:12:25 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 90567 00:28:19.068 00:12:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 90567 ']' 00:28:19.068 00:12:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 90567 00:28:19.068 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90567) - No such process 00:28:19.068 Process with pid 90567 is not found 00:28:19.068 00:12:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 90567 is not found' 00:28:19.068 00:12:25 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:28:19.068 00:12:25 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:19.637 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:19.637 Waiting for block devices as requested 00:28:19.637 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:19.637 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:19.637 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:19.637 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:19.637 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:28:19.637 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:28:19.637 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:19.637 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:28:19.637 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:19.637 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:19.637 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:19.637 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:19.897 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:19.897 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:19.897 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:19.897 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:19.897 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:19.897 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:19.897 00:12:26 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:19.897 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:19.897 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:19.897 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:19.897 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:19.897 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:19.897 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.897 00:12:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:19.897 00:12:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.897 00:12:26 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:28:20.156 ************************************ 00:28:20.156 END TEST nvmf_abort_qd_sizes 00:28:20.156 ************************************ 00:28:20.156 00:28:20.156 real 0m27.534s 00:28:20.156 user 0m53.414s 00:28:20.156 sys 0m7.421s 00:28:20.156 00:12:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:20.156 00:12:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:20.156 00:12:26 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:20.156 00:12:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:20.156 00:12:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:20.156 00:12:26 -- common/autotest_common.sh@10 -- # set +x 00:28:20.156 ************************************ 00:28:20.156 START TEST keyring_file 00:28:20.156 ************************************ 00:28:20.156 00:12:26 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:20.156 * Looking for test storage... 
00:28:20.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:20.156 00:12:26 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:20.156 00:12:26 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:20.156 00:12:26 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:28:20.417 00:12:26 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@345 -- # : 1 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@353 -- # local d=1 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@355 -- # echo 1 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@353 -- # local d=2 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@355 -- # echo 2 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@368 -- # return 0 00:28:20.417 00:12:26 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:20.417 00:12:26 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:20.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.417 --rc genhtml_branch_coverage=1 00:28:20.417 --rc genhtml_function_coverage=1 00:28:20.417 --rc genhtml_legend=1 00:28:20.417 --rc geninfo_all_blocks=1 00:28:20.417 --rc geninfo_unexecuted_blocks=1 00:28:20.417 00:28:20.417 ' 00:28:20.417 00:12:26 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:20.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.417 --rc genhtml_branch_coverage=1 00:28:20.417 --rc genhtml_function_coverage=1 00:28:20.417 --rc genhtml_legend=1 00:28:20.417 --rc geninfo_all_blocks=1 00:28:20.417 --rc 
geninfo_unexecuted_blocks=1 00:28:20.417 00:28:20.417 ' 00:28:20.417 00:12:26 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:20.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.417 --rc genhtml_branch_coverage=1 00:28:20.417 --rc genhtml_function_coverage=1 00:28:20.417 --rc genhtml_legend=1 00:28:20.417 --rc geninfo_all_blocks=1 00:28:20.417 --rc geninfo_unexecuted_blocks=1 00:28:20.417 00:28:20.417 ' 00:28:20.417 00:12:26 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:20.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.417 --rc genhtml_branch_coverage=1 00:28:20.417 --rc genhtml_function_coverage=1 00:28:20.417 --rc genhtml_legend=1 00:28:20.417 --rc geninfo_all_blocks=1 00:28:20.417 --rc geninfo_unexecuted_blocks=1 00:28:20.417 00:28:20.417 ' 00:28:20.417 00:12:26 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:20.417 00:12:26 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.417 00:12:26 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.417 00:12:26 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.417 00:12:26 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.417 00:12:26 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.417 00:12:26 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:20.417 00:12:26 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@51 -- # : 0 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.417 00:12:26 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:20.418 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:20.418 00:12:26 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:20.418 00:12:26 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:20.418 00:12:26 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:20.418 00:12:26 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:20.418 00:12:26 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:20.418 00:12:26 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:20.418 00:12:26 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:20.418 00:12:26 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:20.418 00:12:26 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:20.418 00:12:26 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:20.418 00:12:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:20.418 00:12:26 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:20.418 00:12:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:20.418 00:12:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:20.418 00:12:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:20.418 00:12:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tHz40LO6gN 00:28:20.418 00:12:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:20.418 00:12:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:20.418 00:12:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:20.418 00:12:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:20.418 00:12:26 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:20.418 00:12:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:20.418 00:12:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:20.418 00:12:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tHz40LO6gN 00:28:20.418 00:12:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tHz40LO6gN 00:28:20.418 00:12:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.tHz40LO6gN 00:28:20.418 00:12:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:20.418 00:12:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:20.418 00:12:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:20.418 00:12:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:20.418 00:12:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:20.418 00:12:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:20.418 00:12:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9mfrwnAqO8 00:28:20.418 00:12:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:20.418 00:12:26 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:20.418 00:12:26 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:20.418 00:12:26 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:20.418 00:12:26 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:28:20.418 00:12:26 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:20.418 00:12:26 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:20.418 00:12:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9mfrwnAqO8 00:28:20.418 00:12:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9mfrwnAqO8 00:28:20.418 00:12:26 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.9mfrwnAqO8 00:28:20.418 00:12:26 keyring_file -- keyring/file.sh@30 -- # tgtpid=91594 00:28:20.418 00:12:26 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:20.418 00:12:26 keyring_file -- keyring/file.sh@32 -- # waitforlisten 91594 00:28:20.418 00:12:26 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 91594 ']' 00:28:20.418 00:12:26 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.418 00:12:26 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
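prep_key, traced above for key0 and key1, writes each TLS PSK to a locked-down temp file; format_interchange_psk wraps the raw hex key with the NVMeTLSkey-1 prefix into NVMe TLS interchange form via an inline python helper in nvmf/common.sh (encoding details elided here). Reconstructed for key0:

    key=00112233445566778899aabbccddeeff
    digest=0
    path=$(mktemp)                          # /tmp/tmp.tHz40LO6gN in this run
    format_interchange_psk "$key" "$digest" > "$path"
    chmod 0600 "$path"                      # restrict permissions, as the test does

The resulting paths are what keyring_file_add_key registers with the bdevperf instance further below, over its /var/tmp/bperf.sock RPC socket.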
00:28:20.418 00:12:26 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.418 00:12:26 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.418 00:12:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:20.678 [2024-11-19 00:12:27.129853] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:28:20.678 [2024-11-19 00:12:27.130054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91594 ] 00:28:20.678 [2024-11-19 00:12:27.319040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.941 [2024-11-19 00:12:27.445220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.200 [2024-11-19 00:12:27.713082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:21.767 00:12:28 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:21.767 [2024-11-19 00:12:28.191937] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.767 null0 00:28:21.767 [2024-11-19 00:12:28.223912] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:21.767 [2024-11-19 00:12:28.224230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.767 00:12:28 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:21.767 [2024-11-19 00:12:28.251915] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:21.767 request: 00:28:21.767 { 00:28:21.767 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:21.767 "secure_channel": false, 00:28:21.767 "listen_address": { 00:28:21.767 "trtype": "tcp", 00:28:21.767 "traddr": "127.0.0.1", 00:28:21.767 "trsvcid": "4420" 00:28:21.767 }, 00:28:21.767 "method": "nvmf_subsystem_add_listener", 
00:28:21.767 "req_id": 1 00:28:21.767 } 00:28:21.767 Got JSON-RPC error response 00:28:21.767 response: 00:28:21.767 { 00:28:21.767 "code": -32602, 00:28:21.767 "message": "Invalid parameters" 00:28:21.767 } 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:21.767 00:12:28 keyring_file -- keyring/file.sh@47 -- # bperfpid=91611 00:28:21.767 00:12:28 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:21.767 00:12:28 keyring_file -- keyring/file.sh@49 -- # waitforlisten 91611 /var/tmp/bperf.sock 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 91611 ']' 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:21.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.767 00:12:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:21.767 [2024-11-19 00:12:28.372910] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:28:21.767 [2024-11-19 00:12:28.373109] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91611 ] 00:28:22.026 [2024-11-19 00:12:28.551451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.026 [2024-11-19 00:12:28.636723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.285 [2024-11-19 00:12:28.791238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:22.853 00:12:29 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.853 00:12:29 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:22.853 00:12:29 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tHz40LO6gN 00:28:22.853 00:12:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tHz40LO6gN 00:28:22.853 00:12:29 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9mfrwnAqO8 00:28:22.853 00:12:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9mfrwnAqO8 00:28:23.112 00:12:29 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:23.112 00:12:29 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:28:23.112 00:12:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:23.112 00:12:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:23.112 00:12:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:23.681 00:12:30 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.tHz40LO6gN == \/\t\m\p\/\t\m\p\.\t\H\z\4\0\L\O\6\g\N ]] 00:28:23.681 00:12:30 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:28:23.681 00:12:30 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:28:23.681 00:12:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:23.681 00:12:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:23.681 00:12:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:23.941 00:12:30 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.9mfrwnAqO8 == \/\t\m\p\/\t\m\p\.\9\m\f\r\w\n\A\q\O\8 ]] 00:28:23.941 00:12:30 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:28:23.941 00:12:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:23.941 00:12:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:23.941 00:12:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:23.941 00:12:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:23.941 00:12:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:24.200 00:12:30 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:24.200 00:12:30 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:28:24.200 00:12:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:24.200 00:12:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:24.200 00:12:30 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:24.200 00:12:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:24.200 00:12:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:24.460 00:12:30 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:28:24.460 00:12:30 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:24.460 00:12:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:24.720 [2024-11-19 00:12:31.207913] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:24.720 nvme0n1 00:28:24.720 00:12:31 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:28:24.720 00:12:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:24.720 00:12:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:24.720 00:12:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:24.720 00:12:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:24.720 00:12:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:24.980 00:12:31 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:28:24.980 00:12:31 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:28:24.980 00:12:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:24.980 00:12:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:24.980 00:12:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:24.980 00:12:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:24.980 00:12:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:25.239 00:12:31 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:28:25.239 00:12:31 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:25.239 Running I/O for 1 seconds... 
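This is the positive-path TLS attach: once the controller is created, key0's refcount rises from 1 to 2 because the bdev layer now pins the key in addition to the keyring file itself. A condensed sketch of the same RPC flow, built only from commands that appear in the trace:

RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'

# Attach over NVMe/TCP using the file-based PSK registered as key0.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# The controller holds a reference: refcnt for key0 now reads back as 2.
$RPC keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'

# Kick the queued randrw job; the latency table that follows is its output.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests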
00:28:26.619 9682.00 IOPS, 37.82 MiB/s 00:28:26.619 Latency(us) 00:28:26.619 [2024-11-19T00:12:33.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.619 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:26.619 nvme0n1 : 1.01 9735.81 38.03 0.00 0.00 13104.03 5093.93 22878.02 00:28:26.619 [2024-11-19T00:12:33.311Z] =================================================================================================================== 00:28:26.619 [2024-11-19T00:12:33.311Z] Total : 9735.81 38.03 0.00 0.00 13104.03 5093.93 22878.02 00:28:26.619 { 00:28:26.619 "results": [ 00:28:26.619 { 00:28:26.619 "job": "nvme0n1", 00:28:26.619 "core_mask": "0x2", 00:28:26.619 "workload": "randrw", 00:28:26.619 "percentage": 50, 00:28:26.619 "status": "finished", 00:28:26.619 "queue_depth": 128, 00:28:26.619 "io_size": 4096, 00:28:26.619 "runtime": 1.007723, 00:28:26.619 "iops": 9735.810336769133, 00:28:26.619 "mibps": 38.030509128004425, 00:28:26.619 "io_failed": 0, 00:28:26.619 "io_timeout": 0, 00:28:26.619 "avg_latency_us": 13104.034662391936, 00:28:26.619 "min_latency_us": 5093.9345454545455, 00:28:26.619 "max_latency_us": 22878.02181818182 00:28:26.619 } 00:28:26.619 ], 00:28:26.619 "core_count": 1 00:28:26.619 } 00:28:26.619 00:12:32 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:26.619 00:12:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:26.619 00:12:33 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:28:26.619 00:12:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:26.619 00:12:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:26.619 00:12:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:26.619 00:12:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:26.619 00:12:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:26.878 00:12:33 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:26.878 00:12:33 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:28:26.878 00:12:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:26.878 00:12:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:26.878 00:12:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:26.878 00:12:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:26.878 00:12:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:27.138 00:12:33 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:28:27.138 00:12:33 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:27.138 00:12:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:27.138 00:12:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:27.138 00:12:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:27.138 00:12:33 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:27.138 00:12:33 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:27.138 00:12:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:27.138 00:12:33 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:27.138 00:12:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:27.398 [2024-11-19 00:12:34.001025] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:27.398 [2024-11-19 00:12:34.001682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:28:27.398 [2024-11-19 00:12:34.002647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:28:27.398 [2024-11-19 00:12:34.003624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:28:27.398 [2024-11-19 00:12:34.003685] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:27.398 [2024-11-19 00:12:34.003703] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:28:27.398 [2024-11-19 00:12:34.003718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:28:27.398 request: 00:28:27.398 { 00:28:27.398 "name": "nvme0", 00:28:27.398 "trtype": "tcp", 00:28:27.398 "traddr": "127.0.0.1", 00:28:27.398 "adrfam": "ipv4", 00:28:27.398 "trsvcid": "4420", 00:28:27.398 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:27.398 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:27.398 "prchk_reftag": false, 00:28:27.398 "prchk_guard": false, 00:28:27.398 "hdgst": false, 00:28:27.398 "ddgst": false, 00:28:27.398 "psk": "key1", 00:28:27.398 "allow_unrecognized_csi": false, 00:28:27.398 "method": "bdev_nvme_attach_controller", 00:28:27.398 "req_id": 1 00:28:27.398 } 00:28:27.398 Got JSON-RPC error response 00:28:27.398 response: 00:28:27.398 { 00:28:27.398 "code": -5, 00:28:27.398 "message": "Input/output error" 00:28:27.398 } 00:28:27.398 00:12:34 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:27.398 00:12:34 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:27.398 00:12:34 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:27.398 00:12:34 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:27.398 00:12:34 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:28:27.398 00:12:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:27.398 00:12:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:27.398 00:12:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:27.398 00:12:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:27.398 00:12:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:27.657 00:12:34 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:27.657 00:12:34 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:28:27.657 00:12:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:27.657 00:12:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:27.657 00:12:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:27.657 00:12:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:27.657 00:12:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:27.916 00:12:34 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:28:27.916 00:12:34 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:28:27.916 00:12:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:28.175 00:12:34 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:28:28.175 00:12:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:28.436 00:12:35 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:28:28.436 00:12:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:28.436 00:12:35 keyring_file -- keyring/file.sh@78 -- # jq length 00:28:28.709 00:12:35 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:28:28.709 00:12:35 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.tHz40LO6gN 00:28:28.709 00:12:35 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.tHz40LO6gN 00:28:28.709 00:12:35 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:28:28.709 00:12:35 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.tHz40LO6gN 00:28:28.709 00:12:35 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:28.709 00:12:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:28.709 00:12:35 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:28.709 00:12:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:28.709 00:12:35 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tHz40LO6gN 00:28:28.709 00:12:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tHz40LO6gN 00:28:28.984 [2024-11-19 00:12:35.521948] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.tHz40LO6gN': 0100660 00:28:28.984 [2024-11-19 00:12:35.522017] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:28.984 request: 00:28:28.984 { 00:28:28.984 "name": "key0", 00:28:28.984 "path": "/tmp/tmp.tHz40LO6gN", 00:28:28.984 "method": "keyring_file_add_key", 00:28:28.984 "req_id": 1 00:28:28.984 } 00:28:28.984 Got JSON-RPC error response 00:28:28.984 response: 00:28:28.984 { 00:28:28.984 "code": -1, 00:28:28.984 "message": "Operation not permitted" 00:28:28.984 } 00:28:28.984 00:12:35 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:28.984 00:12:35 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:28.984 00:12:35 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:28.984 00:12:35 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:28.984 00:12:35 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.tHz40LO6gN 00:28:28.984 00:12:35 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tHz40LO6gN 00:28:28.984 00:12:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tHz40LO6gN 00:28:29.243 00:12:35 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.tHz40LO6gN 00:28:29.243 00:12:35 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:28:29.243 00:12:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:29.243 00:12:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:29.243 00:12:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:29.243 00:12:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:29.243 00:12:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:29.503 00:12:36 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:28:29.503 00:12:36 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:29.503 00:12:36 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:29.503 00:12:36 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:29.503 00:12:36 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:29.503 00:12:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:29.503 00:12:36 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:29.503 00:12:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:29.503 00:12:36 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:29.503 00:12:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:29.762 [2024-11-19 00:12:36.306227] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.tHz40LO6gN': No such file or directory 00:28:29.762 [2024-11-19 00:12:36.306298] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:29.762 [2024-11-19 00:12:36.306344] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:29.762 [2024-11-19 00:12:36.306358] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:28:29.762 [2024-11-19 00:12:36.306371] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:29.762 [2024-11-19 00:12:36.306384] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:29.762 request: 00:28:29.762 { 00:28:29.762 "name": "nvme0", 00:28:29.762 "trtype": "tcp", 00:28:29.762 "traddr": "127.0.0.1", 00:28:29.762 "adrfam": "ipv4", 00:28:29.762 "trsvcid": "4420", 00:28:29.762 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:29.762 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:29.762 "prchk_reftag": false, 00:28:29.762 "prchk_guard": false, 00:28:29.762 "hdgst": false, 00:28:29.762 "ddgst": false, 00:28:29.762 "psk": "key0", 00:28:29.762 "allow_unrecognized_csi": false, 00:28:29.762 "method": "bdev_nvme_attach_controller", 00:28:29.762 "req_id": 1 00:28:29.762 } 00:28:29.762 Got JSON-RPC error response 00:28:29.762 response: 00:28:29.762 { 00:28:29.762 "code": -19, 00:28:29.762 "message": "No such device" 00:28:29.762 } 00:28:29.762 00:12:36 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:29.762 00:12:36 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:29.763 00:12:36 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:29.763 00:12:36 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:29.763 00:12:36 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:28:29.763 00:12:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:30.022 00:12:36 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:30.022 00:12:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:30.022 00:12:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:30.022 00:12:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:30.022 
00:12:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:30.022 00:12:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:30.022 00:12:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.GCEEvjY9RB 00:28:30.022 00:12:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:30.022 00:12:36 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:30.022 00:12:36 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:30.022 00:12:36 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:30.022 00:12:36 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:30.022 00:12:36 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:30.022 00:12:36 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:30.022 00:12:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.GCEEvjY9RB 00:28:30.022 00:12:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.GCEEvjY9RB 00:28:30.022 00:12:36 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.GCEEvjY9RB 00:28:30.022 00:12:36 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GCEEvjY9RB 00:28:30.022 00:12:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GCEEvjY9RB 00:28:30.282 00:12:36 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:30.282 00:12:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:30.851 nvme0n1 00:28:30.851 00:12:37 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:28:30.851 00:12:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:30.851 00:12:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:30.851 00:12:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:30.851 00:12:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:30.851 00:12:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:30.851 00:12:37 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:28:30.851 00:12:37 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:28:30.851 00:12:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:31.111 00:12:37 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:28:31.111 00:12:37 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:28:31.111 00:12:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:31.111 00:12:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:31.111 00:12:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:31.371 00:12:37 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:28:31.371 00:12:37 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:28:31.371 00:12:37 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:28:31.371 00:12:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:31.371 00:12:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:31.371 00:12:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:31.371 00:12:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:31.631 00:12:38 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:28:31.631 00:12:38 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:31.631 00:12:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:31.889 00:12:38 keyring_file -- keyring/file.sh@105 -- # jq length 00:28:31.889 00:12:38 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:28:31.889 00:12:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:32.147 00:12:38 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:28:32.147 00:12:38 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GCEEvjY9RB 00:28:32.147 00:12:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GCEEvjY9RB 00:28:32.406 00:12:39 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9mfrwnAqO8 00:28:32.406 00:12:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9mfrwnAqO8 00:28:32.664 00:12:39 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:32.664 00:12:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:32.922 nvme0n1 00:28:32.922 00:12:39 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:28:32.922 00:12:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:33.532 00:12:39 keyring_file -- keyring/file.sh@113 -- # config='{ 00:28:33.532 "subsystems": [ 00:28:33.532 { 00:28:33.532 "subsystem": "keyring", 00:28:33.532 "config": [ 00:28:33.532 { 00:28:33.532 "method": "keyring_file_add_key", 00:28:33.532 "params": { 00:28:33.532 "name": "key0", 00:28:33.532 "path": "/tmp/tmp.GCEEvjY9RB" 00:28:33.532 } 00:28:33.532 }, 00:28:33.532 { 00:28:33.532 "method": "keyring_file_add_key", 00:28:33.532 "params": { 00:28:33.532 "name": "key1", 00:28:33.532 "path": "/tmp/tmp.9mfrwnAqO8" 00:28:33.532 } 00:28:33.532 } 00:28:33.532 ] 00:28:33.532 }, 00:28:33.532 { 00:28:33.532 "subsystem": "iobuf", 00:28:33.532 "config": [ 00:28:33.532 { 00:28:33.532 "method": "iobuf_set_options", 00:28:33.532 "params": { 00:28:33.532 "small_pool_count": 8192, 00:28:33.532 "large_pool_count": 1024, 00:28:33.532 "small_bufsize": 8192, 00:28:33.532 "large_bufsize": 135168, 00:28:33.532 "enable_numa": false 00:28:33.532 } 00:28:33.532 } 00:28:33.532 ] 00:28:33.532 }, 00:28:33.532 { 00:28:33.532 "subsystem": 
"sock", 00:28:33.532 "config": [ 00:28:33.532 { 00:28:33.532 "method": "sock_set_default_impl", 00:28:33.532 "params": { 00:28:33.532 "impl_name": "uring" 00:28:33.532 } 00:28:33.532 }, 00:28:33.532 { 00:28:33.532 "method": "sock_impl_set_options", 00:28:33.532 "params": { 00:28:33.532 "impl_name": "ssl", 00:28:33.532 "recv_buf_size": 4096, 00:28:33.532 "send_buf_size": 4096, 00:28:33.532 "enable_recv_pipe": true, 00:28:33.532 "enable_quickack": false, 00:28:33.532 "enable_placement_id": 0, 00:28:33.532 "enable_zerocopy_send_server": true, 00:28:33.532 "enable_zerocopy_send_client": false, 00:28:33.532 "zerocopy_threshold": 0, 00:28:33.532 "tls_version": 0, 00:28:33.532 "enable_ktls": false 00:28:33.532 } 00:28:33.532 }, 00:28:33.532 { 00:28:33.532 "method": "sock_impl_set_options", 00:28:33.532 "params": { 00:28:33.532 "impl_name": "posix", 00:28:33.532 "recv_buf_size": 2097152, 00:28:33.532 "send_buf_size": 2097152, 00:28:33.532 "enable_recv_pipe": true, 00:28:33.532 "enable_quickack": false, 00:28:33.532 "enable_placement_id": 0, 00:28:33.532 "enable_zerocopy_send_server": true, 00:28:33.532 "enable_zerocopy_send_client": false, 00:28:33.532 "zerocopy_threshold": 0, 00:28:33.532 "tls_version": 0, 00:28:33.532 "enable_ktls": false 00:28:33.532 } 00:28:33.532 }, 00:28:33.532 { 00:28:33.532 "method": "sock_impl_set_options", 00:28:33.532 "params": { 00:28:33.532 "impl_name": "uring", 00:28:33.532 "recv_buf_size": 2097152, 00:28:33.532 "send_buf_size": 2097152, 00:28:33.532 "enable_recv_pipe": true, 00:28:33.532 "enable_quickack": false, 00:28:33.532 "enable_placement_id": 0, 00:28:33.532 "enable_zerocopy_send_server": false, 00:28:33.532 "enable_zerocopy_send_client": false, 00:28:33.532 "zerocopy_threshold": 0, 00:28:33.532 "tls_version": 0, 00:28:33.532 "enable_ktls": false 00:28:33.532 } 00:28:33.532 } 00:28:33.532 ] 00:28:33.532 }, 00:28:33.532 { 00:28:33.532 "subsystem": "vmd", 00:28:33.532 "config": [] 00:28:33.532 }, 00:28:33.532 { 00:28:33.532 "subsystem": "accel", 00:28:33.532 "config": [ 00:28:33.532 { 00:28:33.532 "method": "accel_set_options", 00:28:33.532 "params": { 00:28:33.532 "small_cache_size": 128, 00:28:33.532 "large_cache_size": 16, 00:28:33.532 "task_count": 2048, 00:28:33.532 "sequence_count": 2048, 00:28:33.532 "buf_count": 2048 00:28:33.532 } 00:28:33.532 } 00:28:33.532 ] 00:28:33.532 }, 00:28:33.532 { 00:28:33.532 "subsystem": "bdev", 00:28:33.532 "config": [ 00:28:33.532 { 00:28:33.532 "method": "bdev_set_options", 00:28:33.532 "params": { 00:28:33.532 "bdev_io_pool_size": 65535, 00:28:33.532 "bdev_io_cache_size": 256, 00:28:33.532 "bdev_auto_examine": true, 00:28:33.532 "iobuf_small_cache_size": 128, 00:28:33.532 "iobuf_large_cache_size": 16 00:28:33.532 } 00:28:33.532 }, 00:28:33.532 { 00:28:33.532 "method": "bdev_raid_set_options", 00:28:33.532 "params": { 00:28:33.532 "process_window_size_kb": 1024, 00:28:33.532 "process_max_bandwidth_mb_sec": 0 00:28:33.532 } 00:28:33.532 }, 00:28:33.532 { 00:28:33.532 "method": "bdev_iscsi_set_options", 00:28:33.532 "params": { 00:28:33.532 "timeout_sec": 30 00:28:33.532 } 00:28:33.532 }, 00:28:33.532 { 00:28:33.532 "method": "bdev_nvme_set_options", 00:28:33.532 "params": { 00:28:33.532 "action_on_timeout": "none", 00:28:33.532 "timeout_us": 0, 00:28:33.532 "timeout_admin_us": 0, 00:28:33.532 "keep_alive_timeout_ms": 10000, 00:28:33.532 "arbitration_burst": 0, 00:28:33.532 "low_priority_weight": 0, 00:28:33.532 "medium_priority_weight": 0, 00:28:33.532 "high_priority_weight": 0, 00:28:33.532 "nvme_adminq_poll_period_us": 
10000, 00:28:33.532 "nvme_ioq_poll_period_us": 0, 00:28:33.532 "io_queue_requests": 512, 00:28:33.532 "delay_cmd_submit": true, 00:28:33.532 "transport_retry_count": 4, 00:28:33.532 "bdev_retry_count": 3, 00:28:33.532 "transport_ack_timeout": 0, 00:28:33.532 "ctrlr_loss_timeout_sec": 0, 00:28:33.532 "reconnect_delay_sec": 0, 00:28:33.532 "fast_io_fail_timeout_sec": 0, 00:28:33.532 "disable_auto_failback": false, 00:28:33.532 "generate_uuids": false, 00:28:33.532 "transport_tos": 0, 00:28:33.532 "nvme_error_stat": false, 00:28:33.532 "rdma_srq_size": 0, 00:28:33.532 "io_path_stat": false, 00:28:33.532 "allow_accel_sequence": false, 00:28:33.532 "rdma_max_cq_size": 0, 00:28:33.532 "rdma_cm_event_timeout_ms": 0, 00:28:33.532 "dhchap_digests": [ 00:28:33.532 "sha256", 00:28:33.532 "sha384", 00:28:33.532 "sha512" 00:28:33.532 ], 00:28:33.532 "dhchap_dhgroups": [ 00:28:33.532 "null", 00:28:33.532 "ffdhe2048", 00:28:33.532 "ffdhe3072", 00:28:33.532 "ffdhe4096", 00:28:33.532 "ffdhe6144", 00:28:33.532 "ffdhe8192" 00:28:33.532 ] 00:28:33.532 } 00:28:33.532 }, 00:28:33.532 { 00:28:33.533 "method": "bdev_nvme_attach_controller", 00:28:33.533 "params": { 00:28:33.533 "name": "nvme0", 00:28:33.533 "trtype": "TCP", 00:28:33.533 "adrfam": "IPv4", 00:28:33.533 "traddr": "127.0.0.1", 00:28:33.533 "trsvcid": "4420", 00:28:33.533 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:33.533 "prchk_reftag": false, 00:28:33.533 "prchk_guard": false, 00:28:33.533 "ctrlr_loss_timeout_sec": 0, 00:28:33.533 "reconnect_delay_sec": 0, 00:28:33.533 "fast_io_fail_timeout_sec": 0, 00:28:33.533 "psk": "key0", 00:28:33.533 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:33.533 "hdgst": false, 00:28:33.533 "ddgst": false, 00:28:33.533 "multipath": "multipath" 00:28:33.533 } 00:28:33.533 }, 00:28:33.533 { 00:28:33.533 "method": "bdev_nvme_set_hotplug", 00:28:33.533 "params": { 00:28:33.533 "period_us": 100000, 00:28:33.533 "enable": false 00:28:33.533 } 00:28:33.533 }, 00:28:33.533 { 00:28:33.533 "method": "bdev_wait_for_examine" 00:28:33.533 } 00:28:33.533 ] 00:28:33.533 }, 00:28:33.533 { 00:28:33.533 "subsystem": "nbd", 00:28:33.533 "config": [] 00:28:33.533 } 00:28:33.533 ] 00:28:33.533 }' 00:28:33.533 00:12:39 keyring_file -- keyring/file.sh@115 -- # killprocess 91611 00:28:33.533 00:12:39 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 91611 ']' 00:28:33.533 00:12:39 keyring_file -- common/autotest_common.sh@958 -- # kill -0 91611 00:28:33.533 00:12:39 keyring_file -- common/autotest_common.sh@959 -- # uname 00:28:33.533 00:12:39 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:33.533 00:12:39 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91611 00:28:33.533 killing process with pid 91611 00:28:33.533 Received shutdown signal, test time was about 1.000000 seconds 00:28:33.533 00:28:33.533 Latency(us) 00:28:33.533 [2024-11-19T00:12:40.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.533 [2024-11-19T00:12:40.225Z] =================================================================================================================== 00:28:33.533 [2024-11-19T00:12:40.225Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:33.533 00:12:39 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:33.533 00:12:39 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:33.533 00:12:39 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91611' 00:28:33.533 
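The JSON blob above is what save_config returned for the live bperf instance: both keyring file keys plus the TLS-enabled bdev_nvme_attach_controller are captured as replayable RPC calls. The test now kills pid 91611 and, in the trace that follows, boots a second bdevperf (pid 91864) with that same JSON fed back through -c /dev/fd/63. A condensed sketch, assuming the /dev/fd/63 in the trace comes from a shell process substitution:

# Snapshot the running app's keyring + bdev configuration as JSON...
config=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config)

# ...then restart bdevperf preloaded with it; <(echo "$config") is the
# kind of expression that shows up as -c /dev/fd/63 in the xtrace.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")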
00:12:39 keyring_file -- common/autotest_common.sh@973 -- # kill 91611 00:28:33.533 00:12:39 keyring_file -- common/autotest_common.sh@978 -- # wait 91611 00:28:34.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:34.102 00:12:40 keyring_file -- keyring/file.sh@118 -- # bperfpid=91864 00:28:34.102 00:12:40 keyring_file -- keyring/file.sh@120 -- # waitforlisten 91864 /var/tmp/bperf.sock 00:28:34.102 00:12:40 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 91864 ']' 00:28:34.102 00:12:40 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:34.102 00:12:40 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.102 00:12:40 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:34.102 00:12:40 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:34.102 00:12:40 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.102 00:12:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:34.102 00:12:40 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:28:34.102 "subsystems": [ 00:28:34.102 { 00:28:34.102 "subsystem": "keyring", 00:28:34.102 "config": [ 00:28:34.102 { 00:28:34.102 "method": "keyring_file_add_key", 00:28:34.102 "params": { 00:28:34.102 "name": "key0", 00:28:34.102 "path": "/tmp/tmp.GCEEvjY9RB" 00:28:34.102 } 00:28:34.102 }, 00:28:34.102 { 00:28:34.102 "method": "keyring_file_add_key", 00:28:34.102 "params": { 00:28:34.102 "name": "key1", 00:28:34.102 "path": "/tmp/tmp.9mfrwnAqO8" 00:28:34.102 } 00:28:34.102 } 00:28:34.102 ] 00:28:34.102 }, 00:28:34.102 { 00:28:34.102 "subsystem": "iobuf", 00:28:34.102 "config": [ 00:28:34.102 { 00:28:34.102 "method": "iobuf_set_options", 00:28:34.102 "params": { 00:28:34.102 "small_pool_count": 8192, 00:28:34.102 "large_pool_count": 1024, 00:28:34.102 "small_bufsize": 8192, 00:28:34.102 "large_bufsize": 135168, 00:28:34.102 "enable_numa": false 00:28:34.102 } 00:28:34.102 } 00:28:34.102 ] 00:28:34.102 }, 00:28:34.102 { 00:28:34.102 "subsystem": "sock", 00:28:34.102 "config": [ 00:28:34.102 { 00:28:34.102 "method": "sock_set_default_impl", 00:28:34.102 "params": { 00:28:34.102 "impl_name": "uring" 00:28:34.102 } 00:28:34.102 }, 00:28:34.102 { 00:28:34.102 "method": "sock_impl_set_options", 00:28:34.102 "params": { 00:28:34.102 "impl_name": "ssl", 00:28:34.102 "recv_buf_size": 4096, 00:28:34.102 "send_buf_size": 4096, 00:28:34.102 "enable_recv_pipe": true, 00:28:34.102 "enable_quickack": false, 00:28:34.102 "enable_placement_id": 0, 00:28:34.102 "enable_zerocopy_send_server": true, 00:28:34.102 "enable_zerocopy_send_client": false, 00:28:34.102 "zerocopy_threshold": 0, 00:28:34.102 "tls_version": 0, 00:28:34.102 "enable_ktls": false 00:28:34.102 } 00:28:34.102 }, 00:28:34.102 { 00:28:34.102 "method": "sock_impl_set_options", 00:28:34.102 "params": { 00:28:34.102 "impl_name": "posix", 00:28:34.102 "recv_buf_size": 2097152, 00:28:34.102 "send_buf_size": 2097152, 00:28:34.102 "enable_recv_pipe": true, 00:28:34.102 "enable_quickack": false, 00:28:34.102 "enable_placement_id": 0, 00:28:34.102 "enable_zerocopy_send_server": true, 00:28:34.102 "enable_zerocopy_send_client": false, 00:28:34.102 "zerocopy_threshold": 0, 00:28:34.102 "tls_version": 0, 00:28:34.102 "enable_ktls": false 
00:28:34.102 } 00:28:34.102 }, 00:28:34.102 { 00:28:34.102 "method": "sock_impl_set_options", 00:28:34.102 "params": { 00:28:34.102 "impl_name": "uring", 00:28:34.102 "recv_buf_size": 2097152, 00:28:34.102 "send_buf_size": 2097152, 00:28:34.102 "enable_recv_pipe": true, 00:28:34.102 "enable_quickack": false, 00:28:34.102 "enable_placement_id": 0, 00:28:34.102 "enable_zerocopy_send_server": false, 00:28:34.102 "enable_zerocopy_send_client": false, 00:28:34.102 "zerocopy_threshold": 0, 00:28:34.102 "tls_version": 0, 00:28:34.102 "enable_ktls": false 00:28:34.102 } 00:28:34.102 } 00:28:34.102 ] 00:28:34.102 }, 00:28:34.102 { 00:28:34.102 "subsystem": "vmd", 00:28:34.102 "config": [] 00:28:34.102 }, 00:28:34.102 { 00:28:34.102 "subsystem": "accel", 00:28:34.102 "config": [ 00:28:34.102 { 00:28:34.102 "method": "accel_set_options", 00:28:34.102 "params": { 00:28:34.102 "small_cache_size": 128, 00:28:34.102 "large_cache_size": 16, 00:28:34.102 "task_count": 2048, 00:28:34.102 "sequence_count": 2048, 00:28:34.102 "buf_count": 2048 00:28:34.102 } 00:28:34.102 } 00:28:34.102 ] 00:28:34.102 }, 00:28:34.102 { 00:28:34.102 "subsystem": "bdev", 00:28:34.102 "config": [ 00:28:34.102 { 00:28:34.102 "method": "bdev_set_options", 00:28:34.102 "params": { 00:28:34.102 "bdev_io_pool_size": 65535, 00:28:34.102 "bdev_io_cache_size": 256, 00:28:34.102 "bdev_auto_examine": true, 00:28:34.102 "iobuf_small_cache_size": 128, 00:28:34.102 "iobuf_large_cache_size": 16 00:28:34.102 } 00:28:34.102 }, 00:28:34.102 { 00:28:34.102 "method": "bdev_raid_set_options", 00:28:34.102 "params": { 00:28:34.102 "process_window_size_kb": 1024, 00:28:34.102 "process_max_bandwidth_mb_sec": 0 00:28:34.102 } 00:28:34.102 }, 00:28:34.102 { 00:28:34.102 "method": "bdev_iscsi_set_options", 00:28:34.102 "params": { 00:28:34.103 "timeout_sec": 30 00:28:34.103 } 00:28:34.103 }, 00:28:34.103 { 00:28:34.103 "method": "bdev_nvme_set_options", 00:28:34.103 "params": { 00:28:34.103 "action_on_timeout": "none", 00:28:34.103 "timeout_us": 0, 00:28:34.103 "timeout_admin_us": 0, 00:28:34.103 "keep_alive_timeout_ms": 10000, 00:28:34.103 "arbitration_burst": 0, 00:28:34.103 "low_priority_weight": 0, 00:28:34.103 "medium_priority_weight": 0, 00:28:34.103 "high_priority_weight": 0, 00:28:34.103 "nvme_adminq_poll_period_us": 10000, 00:28:34.103 "nvme_ioq_poll_period_us": 0, 00:28:34.103 "io_queue_requests": 512, 00:28:34.103 "delay_cmd_submit": true, 00:28:34.103 "transport_retry_count": 4, 00:28:34.103 "bdev_retry_count": 3, 00:28:34.103 "transport_ack_timeout": 0, 00:28:34.103 "ctrlr_loss_timeout_sec": 0, 00:28:34.103 "reconnect_delay_sec": 0, 00:28:34.103 "fast_io_fail_timeout_sec": 0, 00:28:34.103 "disable_auto_failback": false, 00:28:34.103 "generate_uuids": false, 00:28:34.103 "transport_tos": 0, 00:28:34.103 "nvme_error_stat": false, 00:28:34.103 "rdma_srq_size": 0, 00:28:34.103 "io_path_stat": false, 00:28:34.103 "allow_accel_sequence": false, 00:28:34.103 "rdma_max_cq_size": 0, 00:28:34.103 "rdma_cm_event_timeout_ms": 0, 00:28:34.103 "dhchap_digests": [ 00:28:34.103 "sha256", 00:28:34.103 "sha384", 00:28:34.103 "sha512" 00:28:34.103 ], 00:28:34.103 "dhchap_dhgroups": [ 00:28:34.103 "null", 00:28:34.103 "ffdhe2048", 00:28:34.103 "ffdhe3072", 00:28:34.103 "ffdhe4096", 00:28:34.103 "ffdhe6144", 00:28:34.103 "ffdhe8192" 00:28:34.103 ] 00:28:34.103 } 00:28:34.103 }, 00:28:34.103 { 00:28:34.103 "method": "bdev_nvme_attach_controller", 00:28:34.103 "params": { 00:28:34.103 "name": "nvme0", 00:28:34.103 "trtype": "TCP", 00:28:34.103 "adrfam": "IPv4", 
00:28:34.103 "traddr": "127.0.0.1", 00:28:34.103 "trsvcid": "4420", 00:28:34.103 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:34.103 "prchk_reftag": false, 00:28:34.103 "prchk_guard": false, 00:28:34.103 "ctrlr_loss_timeout_sec": 0, 00:28:34.103 "reconnect_delay_sec": 0, 00:28:34.103 "fast_io_fail_timeout_sec": 0, 00:28:34.103 "psk": "key0", 00:28:34.103 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:34.103 "hdgst": false, 00:28:34.103 "ddgst": false, 00:28:34.103 "multipath": "multipath" 00:28:34.103 } 00:28:34.103 }, 00:28:34.103 { 00:28:34.103 "method": "bdev_nvme_set_hotplug", 00:28:34.103 "params": { 00:28:34.103 "period_us": 100000, 00:28:34.103 "enable": false 00:28:34.103 } 00:28:34.103 }, 00:28:34.103 { 00:28:34.103 "method": "bdev_wait_for_examine" 00:28:34.103 } 00:28:34.103 ] 00:28:34.103 }, 00:28:34.103 { 00:28:34.103 "subsystem": "nbd", 00:28:34.103 "config": [] 00:28:34.103 } 00:28:34.103 ] 00:28:34.103 }' 00:28:34.103 [2024-11-19 00:12:40.788856] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:28:34.103 [2024-11-19 00:12:40.789054] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91864 ] 00:28:34.362 [2024-11-19 00:12:40.968001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.621 [2024-11-19 00:12:41.054743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.621 [2024-11-19 00:12:41.293665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:34.881 [2024-11-19 00:12:41.402551] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:35.141 00:12:41 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.141 00:12:41 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:35.141 00:12:41 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:28:35.141 00:12:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:35.141 00:12:41 keyring_file -- keyring/file.sh@121 -- # jq length 00:28:35.400 00:12:41 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:35.400 00:12:41 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:28:35.400 00:12:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:35.400 00:12:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:35.400 00:12:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:35.400 00:12:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:35.400 00:12:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:35.660 00:12:42 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:28:35.660 00:12:42 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:28:35.660 00:12:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:35.660 00:12:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:35.660 00:12:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:35.660 00:12:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:35.660 00:12:42 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:35.920 00:12:42 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:28:35.920 00:12:42 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:28:35.920 00:12:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:35.920 00:12:42 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:28:36.180 00:12:42 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:28:36.180 00:12:42 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:36.180 00:12:42 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.GCEEvjY9RB /tmp/tmp.9mfrwnAqO8 00:28:36.180 00:12:42 keyring_file -- keyring/file.sh@20 -- # killprocess 91864 00:28:36.180 00:12:42 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 91864 ']' 00:28:36.180 00:12:42 keyring_file -- common/autotest_common.sh@958 -- # kill -0 91864 00:28:36.180 00:12:42 keyring_file -- common/autotest_common.sh@959 -- # uname 00:28:36.180 00:12:42 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:36.180 00:12:42 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91864 00:28:36.180 00:12:42 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:36.180 00:12:42 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:36.180 killing process with pid 91864 00:28:36.180 Received shutdown signal, test time was about 1.000000 seconds 00:28:36.180 00:28:36.180 Latency(us) 00:28:36.180 [2024-11-19T00:12:42.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.180 [2024-11-19T00:12:42.872Z] =================================================================================================================== 00:28:36.180 [2024-11-19T00:12:42.872Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:36.180 00:12:42 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91864' 00:28:36.180 00:12:42 keyring_file -- common/autotest_common.sh@973 -- # kill 91864 00:28:36.180 00:12:42 keyring_file -- common/autotest_common.sh@978 -- # wait 91864 00:28:37.119 00:12:43 keyring_file -- keyring/file.sh@21 -- # killprocess 91594 00:28:37.119 00:12:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 91594 ']' 00:28:37.119 00:12:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 91594 00:28:37.119 00:12:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:28:37.119 00:12:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:37.119 00:12:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91594 00:28:37.119 killing process with pid 91594 00:28:37.119 00:12:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:37.119 00:12:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:37.119 00:12:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91594' 00:28:37.119 00:12:43 keyring_file -- common/autotest_common.sh@973 -- # kill 91594 00:28:37.119 00:12:43 keyring_file -- common/autotest_common.sh@978 -- # wait 91594 00:28:39.025 ************************************ 00:28:39.025 END TEST keyring_file 00:28:39.025 ************************************ 00:28:39.025 00:28:39.025 real 0m18.729s 00:28:39.025 user 0m43.784s 
00:28:39.025 sys 0m2.991s 00:28:39.025 00:12:45 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:39.025 00:12:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:39.025 00:12:45 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:28:39.025 00:12:45 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:28:39.025 00:12:45 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:39.025 00:12:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:39.025 00:12:45 -- common/autotest_common.sh@10 -- # set +x 00:28:39.025 ************************************ 00:28:39.025 START TEST keyring_linux 00:28:39.025 ************************************ 00:28:39.025 00:12:45 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:28:39.025 Joined session keyring: 627591136 00:28:39.025 * Looking for test storage... 00:28:39.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:39.025 00:12:45 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:39.025 00:12:45 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:28:39.025 00:12:45 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:39.025 00:12:45 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@345 -- # : 1 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@368 -- # return 0 00:28:39.025 00:12:45 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:39.025 00:12:45 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:39.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.025 --rc genhtml_branch_coverage=1 00:28:39.025 --rc genhtml_function_coverage=1 00:28:39.025 --rc genhtml_legend=1 00:28:39.025 --rc geninfo_all_blocks=1 00:28:39.025 --rc geninfo_unexecuted_blocks=1 00:28:39.025 00:28:39.025 ' 00:28:39.025 00:12:45 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:39.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.025 --rc genhtml_branch_coverage=1 00:28:39.025 --rc genhtml_function_coverage=1 00:28:39.025 --rc genhtml_legend=1 00:28:39.025 --rc geninfo_all_blocks=1 00:28:39.025 --rc geninfo_unexecuted_blocks=1 00:28:39.025 00:28:39.025 ' 00:28:39.025 00:12:45 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:39.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.025 --rc genhtml_branch_coverage=1 00:28:39.025 --rc genhtml_function_coverage=1 00:28:39.025 --rc genhtml_legend=1 00:28:39.025 --rc geninfo_all_blocks=1 00:28:39.025 --rc geninfo_unexecuted_blocks=1 00:28:39.025 00:28:39.025 ' 00:28:39.025 00:12:45 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:39.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.025 --rc genhtml_branch_coverage=1 00:28:39.025 --rc genhtml_function_coverage=1 00:28:39.025 --rc genhtml_legend=1 00:28:39.025 --rc geninfo_all_blocks=1 00:28:39.025 --rc geninfo_unexecuted_blocks=1 00:28:39.025 00:28:39.025 ' 00:28:39.025 00:12:45 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:39.025 00:12:45 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:39.025 00:12:45 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:28:39.025 00:12:45 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.025 00:12:45 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.025 00:12:45 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.025 00:12:45 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.025 00:12:45 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.025 00:12:45 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.025 00:12:45 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.025 00:12:45 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.025 00:12:45 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.025 00:12:45 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.025 00:12:45 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:28:39.025 00:12:45 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=2a95e0b2-fe13-4bbc-a195-13b7fe2f640a 00:28:39.025 00:12:45 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.025 00:12:45 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.025 00:12:45 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:39.025 00:12:45 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.025 00:12:45 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.025 00:12:45 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.026 00:12:45 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.026 00:12:45 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.026 00:12:45 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.026 00:12:45 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.026 00:12:45 keyring_linux -- paths/export.sh@5 -- # export PATH 00:28:39.026 00:12:45 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@51 -- # : 0 
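Unlike keyring_file, this suite runs under scripts/keyctl-session-wrapper (the "Joined session keyring: 627591136" line above), so any kernel keys the test creates live in a throwaway session keyring instead of polluting the user keyring. A sketch of that isolation, assuming the keyutils CLI is installed; the ":spdk-test:key0" name mirrors the key paths traced here:

# Run a command inside a fresh anonymous session keyring; keys added
# to @s disappear when the wrapped command exits.
keyctl session - \
    sh -c 'keyctl add user ":spdk-test:key0" "psk-material-here" @s; keyctl show'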
00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:39.026 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:39.026 00:12:45 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:39.026 00:12:45 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:39.026 00:12:45 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:39.026 00:12:45 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:28:39.026 00:12:45 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:28:39.026 00:12:45 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:28:39.026 00:12:45 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:28:39.026 00:12:45 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:39.026 00:12:45 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:28:39.026 00:12:45 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:39.026 00:12:45 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:39.026 00:12:45 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:28:39.026 00:12:45 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@733 -- # python - 00:28:39.026 00:12:45 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:28:39.026 00:12:45 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:28:39.026 /tmp/:spdk-test:key0 00:28:39.026 00:12:45 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:28:39.026 00:12:45 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:39.026 00:12:45 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:28:39.026 00:12:45 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:39.026 00:12:45 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:39.026 00:12:45 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:28:39.026 00:12:45 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:28:39.026 00:12:45 keyring_linux -- nvmf/common.sh@733 -- # python - 00:28:39.285 00:12:45 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:28:39.285 /tmp/:spdk-test:key1 00:28:39.285 00:12:45 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:28:39.285 00:12:45 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=92010 00:28:39.285 00:12:45 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:39.285 00:12:45 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 92010 00:28:39.285 00:12:45 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 92010 ']' 00:28:39.285 00:12:45 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.285 00:12:45 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.285 00:12:45 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.285 00:12:45 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.285 00:12:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:39.285 [2024-11-19 00:12:45.867104] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
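The two prep_key/format_interchange_psk passes above (key0, then key1) wrap the raw hex strings into NVMe TLS PSK interchange strings, the NVMeTLSkey-1:00:...: values that reappear in the keyctl calls below, and write them to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 with mode 0600. A condensed sketch of what the embedded python step plausibly computes, assuming the interchange payload is the key bytes followed by a little-endian CRC-32, base64-encoded (an assumption inferred from the format, not copied from nvmf/common.sh):

    key=00112233445566778899aabbccddeeff   # treated as raw ASCII bytes, per the output below
    digest=0                               # 0 selects the no-hash variant (the :00: field)
    python3 - "$key" "$digest" <<'PY'
    import base64, struct, sys, zlib
    key = sys.argv[1].encode()                  # ASCII bytes of the configured key
    crc = struct.pack('<I', zlib.crc32(key))    # CRC-32 of the key bytes, little-endian
    b64 = base64.b64encode(key + crc).decode()
    print('NVMeTLSkey-1:%02d:%s:' % (int(sys.argv[2]), b64))
    PY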
00:28:39.285 [2024-11-19 00:12:45.867643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92010 ] 00:28:39.543 [2024-11-19 00:12:46.045087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.543 [2024-11-19 00:12:46.126251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.802 [2024-11-19 00:12:46.313674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:40.370 00:12:46 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.370 00:12:46 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:28:40.370 00:12:46 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:28:40.370 00:12:46 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.370 00:12:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:40.370 [2024-11-19 00:12:46.765877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.370 null0 00:28:40.370 [2024-11-19 00:12:46.797862] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:40.370 [2024-11-19 00:12:46.798134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:40.370 00:12:46 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.370 00:12:46 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:28:40.370 533744751 00:28:40.370 00:12:46 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:28:40.370 406447483 00:28:40.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:40.370 00:12:46 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=92027 00:28:40.370 00:12:46 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:28:40.370 00:12:46 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 92027 /var/tmp/bperf.sock 00:28:40.370 00:12:46 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 92027 ']' 00:28:40.370 00:12:46 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:40.370 00:12:46 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:40.370 00:12:46 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:40.370 00:12:46 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:40.370 00:12:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:40.370 [2024-11-19 00:12:46.933724] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
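The keyctl add user calls above are where the test parks both interchange strings in the kernel session keyring; the bare numbers printed after each (533744751 and 406447483) are the key serials that the assertions below compare against. The same lifecycle, condensed into standalone commands using the key0 value from this run:

    # Install the PSK under a well-known name in the session keyring (@s); prints the serial
    sn=$(keyctl add user :spdk-test:key0 \
         'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s)
    keyctl search @s user :spdk-test:key0   # resolves the name back to the same serial
    keyctl print "$sn"                      # dumps the payload for comparison
    keyctl unlink "$sn" @s                  # removes it again, as cleanup does at the end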
00:28:40.370 [2024-11-19 00:12:46.934100] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92027 ] 00:28:40.630 [2024-11-19 00:12:47.118679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.630 [2024-11-19 00:12:47.237477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.567 00:12:47 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:41.567 00:12:47 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:28:41.567 00:12:47 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:28:41.567 00:12:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:28:41.567 00:12:48 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:28:41.567 00:12:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:42.136 [2024-11-19 00:12:48.548214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:42.136 00:12:48 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:42.136 00:12:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:42.395 [2024-11-19 00:12:48.864133] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:42.395 nvme0n1 00:28:42.395 00:12:48 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:28:42.395 00:12:48 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:28:42.395 00:12:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:42.395 00:12:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:42.395 00:12:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:42.395 00:12:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:42.656 00:12:49 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:28:42.656 00:12:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:42.656 00:12:49 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:28:42.656 00:12:49 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:28:42.656 00:12:49 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:42.656 00:12:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:42.656 00:12:49 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:28:42.916 00:12:49 keyring_linux -- keyring/linux.sh@25 -- # sn=533744751 00:28:42.916 00:12:49 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:28:42.916 00:12:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
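The check_keys helper that starts above cross-checks SPDK's view of the keyring against the kernel's: it counts the keys bperf reports, extracts the serial SPDK recorded for :spdk-test:key0, and verifies that keyctl search resolves the same name to the same serial. Condensed (a sketch using the socket and paths from this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq length    # expect exactly 1 key
    sn=$($rpc -s /var/tmp/bperf.sock keyring_get_keys \
         | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
    [[ $sn == $(keyctl search @s user :spdk-test:key0) ]]       # kernel and SPDK agree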
00:28:42.916 00:12:49 keyring_linux -- keyring/linux.sh@26 -- # [[ 533744751 == \5\3\3\7\4\4\7\5\1 ]] 00:28:42.916 00:12:49 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 533744751 00:28:42.916 00:12:49 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:28:42.916 00:12:49 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:43.176 Running I/O for 1 seconds... 00:28:44.114 10212.00 IOPS, 39.89 MiB/s 00:28:44.114 Latency(us) 00:28:44.114 [2024-11-19T00:12:50.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.114 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:44.114 nvme0n1 : 1.01 10229.61 39.96 0.00 0.00 12439.21 6732.33 19303.33 00:28:44.114 [2024-11-19T00:12:50.806Z] =================================================================================================================== 00:28:44.114 [2024-11-19T00:12:50.806Z] Total : 10229.61 39.96 0.00 0.00 12439.21 6732.33 19303.33 00:28:44.114 { 00:28:44.114 "results": [ 00:28:44.114 { 00:28:44.114 "job": "nvme0n1", 00:28:44.114 "core_mask": "0x2", 00:28:44.114 "workload": "randread", 00:28:44.114 "status": "finished", 00:28:44.114 "queue_depth": 128, 00:28:44.114 "io_size": 4096, 00:28:44.114 "runtime": 1.010889, 00:28:44.114 "iops": 10229.609779115215, 00:28:44.114 "mibps": 39.95941319966881, 00:28:44.114 "io_failed": 0, 00:28:44.114 "io_timeout": 0, 00:28:44.114 "avg_latency_us": 12439.211107770481, 00:28:44.114 "min_latency_us": 6732.334545454545, 00:28:44.114 "max_latency_us": 19303.33090909091 00:28:44.114 } 00:28:44.114 ], 00:28:44.114 "core_count": 1 00:28:44.114 } 00:28:44.115 00:12:50 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:44.115 00:12:50 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:44.374 00:12:50 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:28:44.374 00:12:50 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:28:44.374 00:12:50 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:44.374 00:12:50 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:44.374 00:12:50 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:44.374 00:12:50 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:44.634 00:12:51 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:28:44.634 00:12:51 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:44.634 00:12:51 keyring_linux -- keyring/linux.sh@23 -- # return 00:28:44.634 00:12:51 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:44.634 00:12:51 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:28:44.634 00:12:51 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
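The NOT wrapper invoked above is the harness's expected-failure helper: attaching with --psk :spdk-test:key1 is supposed to be rejected (the connection cannot complete with that key, as the error dump below shows), so the helper succeeds only if the wrapped command fails. In spirit (a sketch, not autotest_common.sh's exact implementation):

    # Succeed only when the wrapped command fails; used for negative-test assertions
    NOT() {
        if "$@"; then
            return 1    # unexpected success, which the test must flag
        fi
        return 0        # the failure the test was waiting for
    }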
00:28:44.634 00:12:51 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:44.634 00:12:51 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:44.634 00:12:51 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:44.634 00:12:51 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:44.634 00:12:51 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:44.634 00:12:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:44.893 [2024-11-19 00:12:51.515942] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:44.893 [2024-11-19 00:12:51.516218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:28:44.893 [2024-11-19 00:12:51.517188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:28:44.893 [2024-11-19 00:12:51.518179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:28:44.893 [2024-11-19 00:12:51.518221] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:44.893 [2024-11-19 00:12:51.518274] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:28:44.893 [2024-11-19 00:12:51.518303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:28:44.893 request: 00:28:44.893 { 00:28:44.893 "name": "nvme0", 00:28:44.893 "trtype": "tcp", 00:28:44.893 "traddr": "127.0.0.1", 00:28:44.893 "adrfam": "ipv4", 00:28:44.893 "trsvcid": "4420", 00:28:44.893 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:44.893 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:44.893 "prchk_reftag": false, 00:28:44.893 "prchk_guard": false, 00:28:44.893 "hdgst": false, 00:28:44.893 "ddgst": false, 00:28:44.893 "psk": ":spdk-test:key1", 00:28:44.893 "allow_unrecognized_csi": false, 00:28:44.893 "method": "bdev_nvme_attach_controller", 00:28:44.893 "req_id": 1 00:28:44.893 } 00:28:44.893 Got JSON-RPC error response 00:28:44.893 response: 00:28:44.893 { 00:28:44.893 "code": -5, 00:28:44.893 "message": "Input/output error" 00:28:44.893 } 00:28:44.893 00:12:51 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:28:44.893 00:12:51 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:44.893 00:12:51 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:44.893 00:12:51 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:44.893 00:12:51 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:28:44.893 00:12:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:44.893 00:12:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:28:44.893 00:12:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:28:44.893 00:12:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:28:44.893 00:12:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:28:44.893 00:12:51 keyring_linux -- keyring/linux.sh@33 -- # sn=533744751 00:28:44.893 00:12:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 533744751 00:28:44.893 1 links removed 00:28:44.893 00:12:51 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:44.893 00:12:51 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:28:44.893 00:12:51 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:28:44.893 00:12:51 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:28:44.893 00:12:51 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:28:44.893 00:12:51 keyring_linux -- keyring/linux.sh@33 -- # sn=406447483 00:28:44.893 00:12:51 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 406447483 00:28:44.893 1 links removed 00:28:44.893 00:12:51 keyring_linux -- keyring/linux.sh@41 -- # killprocess 92027 00:28:44.893 00:12:51 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 92027 ']' 00:28:44.894 00:12:51 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 92027 00:28:44.894 00:12:51 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:28:44.894 00:12:51 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:44.894 00:12:51 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92027 00:28:45.153 00:12:51 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:45.153 killing process with pid 92027 00:28:45.153 Received shutdown signal, test time was about 1.000000 seconds 00:28:45.153 00:28:45.153 Latency(us) 00:28:45.153 [2024-11-19T00:12:51.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.153 [2024-11-19T00:12:51.845Z] =================================================================================================================== 00:28:45.153 
[2024-11-19T00:12:51.845Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:45.153 00:12:51 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:45.153 00:12:51 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92027' 00:28:45.153 00:12:51 keyring_linux -- common/autotest_common.sh@973 -- # kill 92027 00:28:45.153 00:12:51 keyring_linux -- common/autotest_common.sh@978 -- # wait 92027 00:28:45.722 00:12:52 keyring_linux -- keyring/linux.sh@42 -- # killprocess 92010 00:28:45.722 00:12:52 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 92010 ']' 00:28:45.722 00:12:52 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 92010 00:28:45.723 00:12:52 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:28:45.723 00:12:52 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:45.723 00:12:52 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92010 00:28:45.723 killing process with pid 92010 00:28:45.723 00:12:52 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:45.723 00:12:52 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:45.723 00:12:52 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92010' 00:28:45.723 00:12:52 keyring_linux -- common/autotest_common.sh@973 -- # kill 92010 00:28:45.723 00:12:52 keyring_linux -- common/autotest_common.sh@978 -- # wait 92010 00:28:47.631 ************************************ 00:28:47.631 END TEST keyring_linux 00:28:47.631 ************************************ 00:28:47.631 00:28:47.631 real 0m8.715s 00:28:47.631 user 0m15.786s 00:28:47.631 sys 0m1.595s 00:28:47.631 00:12:54 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:47.631 00:12:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:47.631 00:12:54 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:28:47.631 00:12:54 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:28:47.631 00:12:54 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:28:47.631 00:12:54 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:28:47.631 00:12:54 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:28:47.631 00:12:54 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:28:47.631 00:12:54 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:28:47.631 00:12:54 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:28:47.631 00:12:54 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:28:47.631 00:12:54 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:28:47.631 00:12:54 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:28:47.631 00:12:54 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:28:47.631 00:12:54 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:28:47.631 00:12:54 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:28:47.631 00:12:54 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:28:47.631 00:12:54 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:28:47.631 00:12:54 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:28:47.631 00:12:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:47.631 00:12:54 -- common/autotest_common.sh@10 -- # set +x 00:28:47.631 00:12:54 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:28:47.631 00:12:54 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:28:47.631 00:12:54 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:28:47.631 00:12:54 -- common/autotest_common.sh@10 -- # set +x 00:28:49.538 INFO: APP EXITING 00:28:49.538 INFO: 
killing all VMs 00:28:49.538 INFO: killing vhost app 00:28:49.538 INFO: EXIT DONE 00:28:50.108 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:50.108 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:50.108 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:50.676 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:50.677 Cleaning 00:28:50.677 Removing: /var/run/dpdk/spdk0/config 00:28:50.677 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:50.677 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:50.677 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:50.677 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:50.677 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:50.677 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:50.677 Removing: /var/run/dpdk/spdk1/config 00:28:50.677 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:50.677 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:50.677 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:50.677 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:50.677 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:50.677 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:50.677 Removing: /var/run/dpdk/spdk2/config 00:28:50.677 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:50.677 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:50.677 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:50.677 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:50.677 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:50.677 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:50.677 Removing: /var/run/dpdk/spdk3/config 00:28:50.677 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:50.677 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:50.677 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:50.677 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:50.677 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:50.677 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:50.677 Removing: /var/run/dpdk/spdk4/config 00:28:50.677 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:50.677 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:50.677 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:50.677 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:50.677 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:50.677 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:50.677 Removing: /dev/shm/nvmf_trace.0 00:28:50.677 Removing: /dev/shm/spdk_tgt_trace.pid57459 00:28:50.937 Removing: /var/run/dpdk/spdk0 00:28:50.937 Removing: /var/run/dpdk/spdk1 00:28:50.937 Removing: /var/run/dpdk/spdk2 00:28:50.937 Removing: /var/run/dpdk/spdk3 00:28:50.937 Removing: /var/run/dpdk/spdk4 00:28:50.937 Removing: /var/run/dpdk/spdk_pid57246 00:28:50.937 Removing: /var/run/dpdk/spdk_pid57459 00:28:50.937 Removing: /var/run/dpdk/spdk_pid57683 00:28:50.937 Removing: /var/run/dpdk/spdk_pid57781 00:28:50.937 Removing: /var/run/dpdk/spdk_pid57826 00:28:50.937 Removing: /var/run/dpdk/spdk_pid57954 00:28:50.937 Removing: /var/run/dpdk/spdk_pid57972 00:28:50.937 Removing: /var/run/dpdk/spdk_pid58131 00:28:50.937 Removing: /var/run/dpdk/spdk_pid58340 00:28:50.937 Removing: /var/run/dpdk/spdk_pid58506 00:28:50.937 Removing: 
/var/run/dpdk/spdk_pid58599 00:28:50.937 Removing: /var/run/dpdk/spdk_pid58695 00:28:50.937 Removing: /var/run/dpdk/spdk_pid58817 00:28:50.937 Removing: /var/run/dpdk/spdk_pid58914 00:28:50.937 Removing: /var/run/dpdk/spdk_pid58948 00:28:50.937 Removing: /var/run/dpdk/spdk_pid58990 00:28:50.937 Removing: /var/run/dpdk/spdk_pid59061 00:28:50.937 Removing: /var/run/dpdk/spdk_pid59172 00:28:50.937 Removing: /var/run/dpdk/spdk_pid59631 00:28:50.937 Removing: /var/run/dpdk/spdk_pid59701 00:28:50.937 Removing: /var/run/dpdk/spdk_pid59767 00:28:50.937 Removing: /var/run/dpdk/spdk_pid59783 00:28:50.937 Removing: /var/run/dpdk/spdk_pid59907 00:28:50.937 Removing: /var/run/dpdk/spdk_pid59923 00:28:50.937 Removing: /var/run/dpdk/spdk_pid60049 00:28:50.937 Removing: /var/run/dpdk/spdk_pid60066 00:28:50.937 Removing: /var/run/dpdk/spdk_pid60125 00:28:50.937 Removing: /var/run/dpdk/spdk_pid60148 00:28:50.937 Removing: /var/run/dpdk/spdk_pid60207 00:28:50.937 Removing: /var/run/dpdk/spdk_pid60225 00:28:50.937 Removing: /var/run/dpdk/spdk_pid60401 00:28:50.937 Removing: /var/run/dpdk/spdk_pid60438 00:28:50.937 Removing: /var/run/dpdk/spdk_pid60527 00:28:50.937 Removing: /var/run/dpdk/spdk_pid60873 00:28:50.937 Removing: /var/run/dpdk/spdk_pid60897 00:28:50.937 Removing: /var/run/dpdk/spdk_pid60940 00:28:50.937 Removing: /var/run/dpdk/spdk_pid60960 00:28:50.937 Removing: /var/run/dpdk/spdk_pid60992 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61019 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61044 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61066 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61097 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61123 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61153 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61183 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61204 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61232 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61263 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61283 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61316 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61347 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61367 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61398 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61437 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61468 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61504 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61588 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61623 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61650 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61696 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61718 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61737 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61797 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61823 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61863 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61885 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61901 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61922 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61944 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61960 00:28:50.937 Removing: /var/run/dpdk/spdk_pid61987 00:28:50.937 Removing: /var/run/dpdk/spdk_pid62003 00:28:50.937 Removing: /var/run/dpdk/spdk_pid62049 00:28:50.937 Removing: /var/run/dpdk/spdk_pid62082 00:28:50.937 Removing: /var/run/dpdk/spdk_pid62098 00:28:50.937 Removing: /var/run/dpdk/spdk_pid62144 00:28:50.937 Removing: /var/run/dpdk/spdk_pid62160 00:28:50.937 Removing: /var/run/dpdk/spdk_pid62180 00:28:50.937 Removing: /var/run/dpdk/spdk_pid62232 00:28:50.937 Removing: /var/run/dpdk/spdk_pid62256 
00:28:50.937 Removing: /var/run/dpdk/spdk_pid62294 00:28:50.937 Removing: /var/run/dpdk/spdk_pid62308 00:28:50.937 Removing: /var/run/dpdk/spdk_pid62328 00:28:50.937 Removing: /var/run/dpdk/spdk_pid62347 00:28:50.937 Removing: /var/run/dpdk/spdk_pid62367 00:28:50.937 Removing: /var/run/dpdk/spdk_pid62381 00:28:51.198 Removing: /var/run/dpdk/spdk_pid62406 00:28:51.198 Removing: /var/run/dpdk/spdk_pid62420 00:28:51.198 Removing: /var/run/dpdk/spdk_pid62514 00:28:51.198 Removing: /var/run/dpdk/spdk_pid62601 00:28:51.198 Removing: /var/run/dpdk/spdk_pid62748 00:28:51.198 Removing: /var/run/dpdk/spdk_pid62793 00:28:51.198 Removing: /var/run/dpdk/spdk_pid62850 00:28:51.198 Removing: /var/run/dpdk/spdk_pid62877 00:28:51.198 Removing: /var/run/dpdk/spdk_pid62911 00:28:51.198 Removing: /var/run/dpdk/spdk_pid62932 00:28:51.198 Removing: /var/run/dpdk/spdk_pid62981 00:28:51.198 Removing: /var/run/dpdk/spdk_pid63003 00:28:51.198 Removing: /var/run/dpdk/spdk_pid63093 00:28:51.198 Removing: /var/run/dpdk/spdk_pid63132 00:28:51.198 Removing: /var/run/dpdk/spdk_pid63205 00:28:51.198 Removing: /var/run/dpdk/spdk_pid63306 00:28:51.198 Removing: /var/run/dpdk/spdk_pid63403 00:28:51.198 Removing: /var/run/dpdk/spdk_pid63453 00:28:51.198 Removing: /var/run/dpdk/spdk_pid63572 00:28:51.198 Removing: /var/run/dpdk/spdk_pid63627 00:28:51.198 Removing: /var/run/dpdk/spdk_pid63671 00:28:51.198 Removing: /var/run/dpdk/spdk_pid63921 00:28:51.198 Removing: /var/run/dpdk/spdk_pid64034 00:28:51.198 Removing: /var/run/dpdk/spdk_pid64079 00:28:51.198 Removing: /var/run/dpdk/spdk_pid64105 00:28:51.198 Removing: /var/run/dpdk/spdk_pid64155 00:28:51.198 Removing: /var/run/dpdk/spdk_pid64196 00:28:51.198 Removing: /var/run/dpdk/spdk_pid64242 00:28:51.198 Removing: /var/run/dpdk/spdk_pid64287 00:28:51.198 Removing: /var/run/dpdk/spdk_pid64688 00:28:51.198 Removing: /var/run/dpdk/spdk_pid64727 00:28:51.198 Removing: /var/run/dpdk/spdk_pid65110 00:28:51.198 Removing: /var/run/dpdk/spdk_pid65595 00:28:51.198 Removing: /var/run/dpdk/spdk_pid65875 00:28:51.198 Removing: /var/run/dpdk/spdk_pid66800 00:28:51.198 Removing: /var/run/dpdk/spdk_pid67762 00:28:51.198 Removing: /var/run/dpdk/spdk_pid67887 00:28:51.198 Removing: /var/run/dpdk/spdk_pid67962 00:28:51.198 Removing: /var/run/dpdk/spdk_pid69440 00:28:51.198 Removing: /var/run/dpdk/spdk_pid69803 00:28:51.198 Removing: /var/run/dpdk/spdk_pid73503 00:28:51.198 Removing: /var/run/dpdk/spdk_pid73914 00:28:51.198 Removing: /var/run/dpdk/spdk_pid74033 00:28:51.198 Removing: /var/run/dpdk/spdk_pid74175 00:28:51.198 Removing: /var/run/dpdk/spdk_pid74210 00:28:51.198 Removing: /var/run/dpdk/spdk_pid74257 00:28:51.198 Removing: /var/run/dpdk/spdk_pid74299 00:28:51.198 Removing: /var/run/dpdk/spdk_pid74413 00:28:51.198 Removing: /var/run/dpdk/spdk_pid74563 00:28:51.198 Removing: /var/run/dpdk/spdk_pid74752 00:28:51.198 Removing: /var/run/dpdk/spdk_pid74852 00:28:51.198 Removing: /var/run/dpdk/spdk_pid75065 00:28:51.198 Removing: /var/run/dpdk/spdk_pid75167 00:28:51.198 Removing: /var/run/dpdk/spdk_pid75276 00:28:51.198 Removing: /var/run/dpdk/spdk_pid75653 00:28:51.198 Removing: /var/run/dpdk/spdk_pid76088 00:28:51.198 Removing: /var/run/dpdk/spdk_pid76089 00:28:51.198 Removing: /var/run/dpdk/spdk_pid76090 00:28:51.198 Removing: /var/run/dpdk/spdk_pid76377 00:28:51.198 Removing: /var/run/dpdk/spdk_pid76658 00:28:51.198 Removing: /var/run/dpdk/spdk_pid76668 00:28:51.198 Removing: /var/run/dpdk/spdk_pid79024 00:28:51.198 Removing: /var/run/dpdk/spdk_pid79449 00:28:51.198 Removing: 
/var/run/dpdk/spdk_pid79452 00:28:51.198 Removing: /var/run/dpdk/spdk_pid79796 00:28:51.198 Removing: /var/run/dpdk/spdk_pid79811 00:28:51.198 Removing: /var/run/dpdk/spdk_pid79826 00:28:51.198 Removing: /var/run/dpdk/spdk_pid79866 00:28:51.198 Removing: /var/run/dpdk/spdk_pid79878 00:28:51.198 Removing: /var/run/dpdk/spdk_pid79969 00:28:51.198 Removing: /var/run/dpdk/spdk_pid79972 00:28:51.198 Removing: /var/run/dpdk/spdk_pid80081 00:28:51.198 Removing: /var/run/dpdk/spdk_pid80090 00:28:51.198 Removing: /var/run/dpdk/spdk_pid80200 00:28:51.198 Removing: /var/run/dpdk/spdk_pid80203 00:28:51.198 Removing: /var/run/dpdk/spdk_pid80652 00:28:51.198 Removing: /var/run/dpdk/spdk_pid80694 00:28:51.198 Removing: /var/run/dpdk/spdk_pid80802 00:28:51.198 Removing: /var/run/dpdk/spdk_pid80871 00:28:51.198 Removing: /var/run/dpdk/spdk_pid81245 00:28:51.198 Removing: /var/run/dpdk/spdk_pid81448 00:28:51.198 Removing: /var/run/dpdk/spdk_pid81904 00:28:51.198 Removing: /var/run/dpdk/spdk_pid82468 00:28:51.198 Removing: /var/run/dpdk/spdk_pid83347 00:28:51.198 Removing: /var/run/dpdk/spdk_pid83997 00:28:51.198 Removing: /var/run/dpdk/spdk_pid84007 00:28:51.198 Removing: /var/run/dpdk/spdk_pid86036 00:28:51.198 Removing: /var/run/dpdk/spdk_pid86103 00:28:51.458 Removing: /var/run/dpdk/spdk_pid86171 00:28:51.458 Removing: /var/run/dpdk/spdk_pid86238 00:28:51.458 Removing: /var/run/dpdk/spdk_pid86373 00:28:51.458 Removing: /var/run/dpdk/spdk_pid86440 00:28:51.458 Removing: /var/run/dpdk/spdk_pid86502 00:28:51.458 Removing: /var/run/dpdk/spdk_pid86565 00:28:51.458 Removing: /var/run/dpdk/spdk_pid86960 00:28:51.458 Removing: /var/run/dpdk/spdk_pid88190 00:28:51.458 Removing: /var/run/dpdk/spdk_pid88338 00:28:51.458 Removing: /var/run/dpdk/spdk_pid88583 00:28:51.458 Removing: /var/run/dpdk/spdk_pid89197 00:28:51.458 Removing: /var/run/dpdk/spdk_pid89357 00:28:51.458 Removing: /var/run/dpdk/spdk_pid89518 00:28:51.458 Removing: /var/run/dpdk/spdk_pid89619 00:28:51.458 Removing: /var/run/dpdk/spdk_pid89779 00:28:51.458 Removing: /var/run/dpdk/spdk_pid89894 00:28:51.458 Removing: /var/run/dpdk/spdk_pid90618 00:28:51.458 Removing: /var/run/dpdk/spdk_pid90654 00:28:51.458 Removing: /var/run/dpdk/spdk_pid90691 00:28:51.458 Removing: /var/run/dpdk/spdk_pid91048 00:28:51.458 Removing: /var/run/dpdk/spdk_pid91085 00:28:51.458 Removing: /var/run/dpdk/spdk_pid91121 00:28:51.458 Removing: /var/run/dpdk/spdk_pid91594 00:28:51.458 Removing: /var/run/dpdk/spdk_pid91611 00:28:51.458 Removing: /var/run/dpdk/spdk_pid91864 00:28:51.458 Removing: /var/run/dpdk/spdk_pid92010 00:28:51.458 Removing: /var/run/dpdk/spdk_pid92027 00:28:51.458 Clean 00:28:51.458 00:12:58 -- common/autotest_common.sh@1453 -- # return 0 00:28:51.458 00:12:58 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:28:51.458 00:12:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:51.458 00:12:58 -- common/autotest_common.sh@10 -- # set +x 00:28:51.458 00:12:58 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:28:51.458 00:12:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:51.458 00:12:58 -- common/autotest_common.sh@10 -- # set +x 00:28:51.458 00:12:58 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:51.458 00:12:58 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:51.458 00:12:58 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:51.458 00:12:58 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:28:51.458 
00:12:58 -- spdk/autotest.sh@398 -- # hostname 00:28:51.458 00:12:58 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:51.716 geninfo: WARNING: invalid characters removed from testname! 00:29:18.302 00:13:21 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:18.302 00:13:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:20.838 00:13:27 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:23.389 00:13:29 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:26.005 00:13:32 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:28.540 00:13:34 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:30.444 00:13:37 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:30.444 00:13:37 -- spdk/autorun.sh@1 -- $ timing_finish 00:29:30.444 00:13:37 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:29:30.444 00:13:37 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:30.444 00:13:37 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:29:30.444 00:13:37 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build 
Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:30.703 + [[ -n 5251 ]] 00:29:30.703 + sudo kill 5251 00:29:30.713 [Pipeline] } 00:29:30.732 [Pipeline] // timeout 00:29:30.737 [Pipeline] } 00:29:30.755 [Pipeline] // stage 00:29:30.760 [Pipeline] } 00:29:30.777 [Pipeline] // catchError 00:29:30.788 [Pipeline] stage 00:29:30.790 [Pipeline] { (Stop VM) 00:29:30.804 [Pipeline] sh 00:29:31.087 + vagrant halt 00:29:33.622 ==> default: Halting domain... 00:29:40.201 [Pipeline] sh 00:29:40.482 + vagrant destroy -f 00:29:43.016 ==> default: Removing domain... 00:29:43.288 [Pipeline] sh 00:29:43.569 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:29:43.578 [Pipeline] } 00:29:43.596 [Pipeline] // stage 00:29:43.602 [Pipeline] } 00:29:43.618 [Pipeline] // dir 00:29:43.624 [Pipeline] } 00:29:43.641 [Pipeline] // wrap 00:29:43.647 [Pipeline] } 00:29:43.662 [Pipeline] // catchError 00:29:43.672 [Pipeline] stage 00:29:43.675 [Pipeline] { (Epilogue) 00:29:43.689 [Pipeline] sh 00:29:43.976 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:49.263 [Pipeline] catchError 00:29:49.265 [Pipeline] { 00:29:49.278 [Pipeline] sh 00:29:49.560 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:49.819 Artifacts sizes are good 00:29:49.828 [Pipeline] } 00:29:49.841 [Pipeline] // catchError 00:29:49.852 [Pipeline] archiveArtifacts 00:29:49.858 Archiving artifacts 00:29:49.983 [Pipeline] cleanWs 00:29:49.994 [WS-CLEANUP] Deleting project workspace... 00:29:49.994 [WS-CLEANUP] Deferred wipeout is used... 00:29:50.001 [WS-CLEANUP] done 00:29:50.003 [Pipeline] } 00:29:50.017 [Pipeline] // stage 00:29:50.023 [Pipeline] } 00:29:50.035 [Pipeline] // node 00:29:50.040 [Pipeline] End of Pipeline 00:29:50.090 Finished: SUCCESS
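A closing note for anyone rerunning the coverage pass: the lcov invocations above only merge and filter the per-run tracefiles into cov_total.info; the genhtml_* rc options they carry take effect in the HTML rendering step, which this log does not capture. That step would look roughly like this (a sketch; the output directory is illustrative):

    # Render the merged, filtered tracefile into a browsable report
    genhtml --branch-coverage --function-coverage --legend \
        -o /home/vagrant/spdk_repo/output/cov_html \
        /home/vagrant/spdk_repo/output/cov_total.info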